00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 636
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3296
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.072 The recommended git tool is: git
00:00:00.072 using credential 00000000-0000-0000-0000-000000000002
00:00:00.074 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.112 Fetching changes from the remote Git repository
00:00:00.113 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.145 Using shallow fetch with depth 1
00:00:00.145 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.145 > git --version # timeout=10
00:00:00.187 > git --version # 'git version 2.39.2'
00:00:00.187 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.216 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.216 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.730 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.743 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.757 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD)
00:00:02.757 > git config core.sparsecheckout # timeout=10
00:00:02.770 > git read-tree -mu HEAD # timeout=10
00:00:02.786 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5
00:00:02.806 Commit message: "packer: Add bios builder"
00:00:02.807 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10
00:00:02.912 [Pipeline] Start of Pipeline
00:00:02.926 [Pipeline] library
00:00:02.928 Loading library shm_lib@master
00:00:02.928 Library shm_lib@master is cached. Copying from home.
00:00:02.946 [Pipeline] node
00:00:02.955 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:02.957 [Pipeline] {
00:00:02.970 [Pipeline] catchError
00:00:02.972 [Pipeline] {
00:00:02.988 [Pipeline] wrap
00:00:02.999 [Pipeline] {
00:00:03.011 [Pipeline] stage
00:00:03.012 [Pipeline] { (Prologue)
00:00:03.195 [Pipeline] sh
00:00:03.478 + logger -p user.info -t JENKINS-CI
00:00:03.492 [Pipeline] echo
00:00:03.493 Node: GP11
00:00:03.497 [Pipeline] sh
00:00:03.827 [Pipeline] setCustomBuildProperty
00:00:03.837 [Pipeline] echo
00:00:03.839 Cleanup processes
00:00:03.842 [Pipeline] sh
00:00:04.122 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.122 2027039 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.133 [Pipeline] sh
00:00:04.411 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.411 ++ grep -v 'sudo pgrep'
00:00:04.411 ++ awk '{print $1}'
00:00:04.411 + sudo kill -9
00:00:04.411 + true
00:00:04.425 [Pipeline] cleanWs
00:00:04.437 [WS-CLEANUP] Deleting project workspace...
00:00:04.437 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.442 [WS-CLEANUP] done
00:00:04.446 [Pipeline] setCustomBuildProperty
00:00:04.461 [Pipeline] sh
00:00:04.742 + sudo git config --global --replace-all safe.directory '*'
00:00:04.812 [Pipeline] httpRequest
00:00:04.843 [Pipeline] echo
00:00:04.844 Sorcerer 10.211.164.101 is alive
00:00:04.851 [Pipeline] httpRequest
00:00:04.855 HttpMethod: GET
00:00:04.855 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:04.856 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:04.859 Response Code: HTTP/1.1 200 OK
00:00:04.859 Success: Status code 200 is in the accepted range: 200,404
00:00:04.859 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:05.696 [Pipeline] sh
00:00:05.978 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:05.994 [Pipeline] httpRequest
00:00:06.011 [Pipeline] echo
00:00:06.013 Sorcerer 10.211.164.101 is alive
00:00:06.020 [Pipeline] httpRequest
00:00:06.024 HttpMethod: GET
00:00:06.025 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:06.025 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:06.040 Response Code: HTTP/1.1 200 OK
00:00:06.041 Success: Status code 200 is in the accepted range: 200,404
00:00:06.041 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:41.156 [Pipeline] sh
00:00:41.443 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:44.738 [Pipeline] sh
00:00:45.023 + git -C spdk log --oneline -n5
00:00:45.023 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:00:45.023 fc2398dfa raid: clear base bdev configure_cb after executing
00:00:45.023 5558f3f50 raid: complete bdev_raid_create after sb is written
00:00:45.023 d005e023b raid: fix empty slot not updated in sb after resize
00:00:45.023 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:00:45.042 [Pipeline] withCredentials
00:00:45.053 > git --version # timeout=10
00:00:45.065 > git --version # 'git version 2.39.2'
00:00:45.084 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:45.086 [Pipeline] {
00:00:45.095 [Pipeline] retry
00:00:45.098 [Pipeline] {
00:00:45.114 [Pipeline] sh
00:00:45.400 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:45.414 [Pipeline] }
00:00:45.434 [Pipeline] // retry
00:00:45.438 [Pipeline] }
00:00:45.457 [Pipeline] // withCredentials
00:00:45.468 [Pipeline] httpRequest
00:00:45.500 [Pipeline] echo
00:00:45.501 Sorcerer 10.211.164.101 is alive
00:00:45.510 [Pipeline] httpRequest
00:00:45.515 HttpMethod: GET
00:00:45.515 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:45.516 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:45.529 Response Code: HTTP/1.1 200 OK
00:00:45.529 Success: Status code 200 is in the accepted range: 200,404
00:00:45.529 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:10.140 [Pipeline] sh
00:01:10.428 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:12.350 [Pipeline] sh
00:01:12.637 + git -C dpdk log --oneline -n5
00:01:12.637 eeb0605f11 version: 23.11.0
00:01:12.637 238778122a doc: update release notes for 23.11
00:01:12.637 46aa6b3cfc doc: fix description of RSS features
00:01:12.637 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:12.637 7e421ae345 devtools: support skipping forbid rule check
00:01:12.649 [Pipeline] }
00:01:12.665 [Pipeline] // stage
00:01:12.675 [Pipeline] stage
00:01:12.677 [Pipeline] { (Prepare)
00:01:12.697 [Pipeline] writeFile
00:01:12.715 [Pipeline] sh
00:01:13.000 + logger -p user.info -t JENKINS-CI
00:01:13.013 [Pipeline] sh
00:01:13.300 + logger -p user.info -t JENKINS-CI
00:01:13.314 [Pipeline] sh
00:01:13.600 + cat autorun-spdk.conf
00:01:13.600 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.600 SPDK_TEST_NVMF=1
00:01:13.600 SPDK_TEST_NVME_CLI=1
00:01:13.600 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.600 SPDK_TEST_NVMF_NICS=e810
00:01:13.600 SPDK_TEST_VFIOUSER=1
00:01:13.600 SPDK_RUN_UBSAN=1
00:01:13.600 NET_TYPE=phy
00:01:13.600 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:13.600 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:13.608 RUN_NIGHTLY=1
00:01:13.612 [Pipeline] readFile
00:01:13.637 [Pipeline] withEnv
00:01:13.639 [Pipeline] {
00:01:13.653 [Pipeline] sh
00:01:13.941 + set -ex
00:01:13.941 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:13.941 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.941 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.941 ++ SPDK_TEST_NVMF=1
00:01:13.941 ++ SPDK_TEST_NVME_CLI=1
00:01:13.941 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.941 ++ SPDK_TEST_NVMF_NICS=e810
00:01:13.941 ++ SPDK_TEST_VFIOUSER=1
00:01:13.941 ++ SPDK_RUN_UBSAN=1
00:01:13.941 ++ NET_TYPE=phy
00:01:13.941 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:13.941 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:13.941 ++ RUN_NIGHTLY=1
00:01:13.941 + case $SPDK_TEST_NVMF_NICS in
00:01:13.941 + DRIVERS=ice
00:01:13.941 + [[ tcp == \r\d\m\a ]]
00:01:13.941 + [[ -n ice ]]
00:01:13.941 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:13.941 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:13.941 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:13.941 rmmod: ERROR: Module irdma is not currently loaded
00:01:13.941 rmmod: ERROR: Module i40iw is not currently loaded
00:01:13.941 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:13.941 + true
00:01:13.941 + for D in $DRIVERS
00:01:13.941 + sudo modprobe ice
00:01:13.941 + exit 0
00:01:13.951 [Pipeline] }
00:01:13.968 [Pipeline] // withEnv
00:01:13.974 [Pipeline] }
00:01:13.990 [Pipeline] // stage
00:01:14.000 [Pipeline] catchError
00:01:14.002 [Pipeline] {
00:01:14.017 [Pipeline] timeout
00:01:14.017 Timeout set to expire in 50 min
00:01:14.019 [Pipeline] {
00:01:14.035 [Pipeline] stage
00:01:14.037 [Pipeline] { (Tests)
00:01:14.052 [Pipeline] sh
00:01:14.339 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:14.339 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:14.339 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:14.339 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:14.339 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:14.339 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:14.339 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:14.339 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:14.339 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:14.339 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:14.339 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:14.339 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:14.339 + source /etc/os-release
00:01:14.339 ++ NAME='Fedora Linux'
00:01:14.339 ++ VERSION='38 (Cloud Edition)'
00:01:14.339 ++ ID=fedora
00:01:14.339 ++ VERSION_ID=38
00:01:14.339 ++ VERSION_CODENAME=
00:01:14.339 ++ PLATFORM_ID=platform:f38
00:01:14.339 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:14.339 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:14.339 ++ LOGO=fedora-logo-icon
00:01:14.339 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:14.339 ++ HOME_URL=https://fedoraproject.org/
00:01:14.339 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:14.339 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:14.339 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:14.339 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:14.339 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:14.339 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:14.339 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:14.339 ++ SUPPORT_END=2024-05-14
00:01:14.339 ++ VARIANT='Cloud Edition'
00:01:14.339 ++ VARIANT_ID=cloud
00:01:14.339 + uname -a
00:01:14.339 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:14.339 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:15.281 Hugepages
00:01:15.281 node hugesize free / total
00:01:15.281 node0 1048576kB 0 / 0
00:01:15.281 node0 2048kB 0 / 0
00:01:15.281 node1 1048576kB 0 / 0
00:01:15.281 node1 2048kB 0 / 0
00:01:15.281
00:01:15.281 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:15.281 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:15.281 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:15.281 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:15.281 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:15.281 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:15.281 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:15.281 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:15.281 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:15.281 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:15.281 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:15.281 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:15.281 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:15.281 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:15.281 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:15.281 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:15.281 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:15.281 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:15.281 + rm -f /tmp/spdk-ld-path
00:01:15.281 + source autorun-spdk.conf
00:01:15.281 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.281 ++ SPDK_TEST_NVMF=1
00:01:15.281 ++ SPDK_TEST_NVME_CLI=1
00:01:15.281 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.281 ++ SPDK_TEST_NVMF_NICS=e810
00:01:15.281 ++ SPDK_TEST_VFIOUSER=1
00:01:15.281 ++ SPDK_RUN_UBSAN=1
00:01:15.281 ++ NET_TYPE=phy
00:01:15.281 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:15.281 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:15.281 ++ RUN_NIGHTLY=1
00:01:15.281 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:15.281 + [[ -n '' ]]
00:01:15.281 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:15.281 + for M in /var/spdk/build-*-manifest.txt
00:01:15.281 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:15.281 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:15.281 + for M in /var/spdk/build-*-manifest.txt
00:01:15.281 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:15.281 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:15.281 ++ uname
00:01:15.281 + [[ Linux == \L\i\n\u\x ]]
00:01:15.281 + sudo dmesg -T
00:01:15.541 + sudo dmesg --clear
00:01:15.541 + dmesg_pid=2028364
00:01:15.541 + [[ Fedora Linux == FreeBSD ]]
00:01:15.541 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.541 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.541 + sudo dmesg -Tw
00:01:15.541 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:15.541 + [[ -x /usr/src/fio-static/fio ]]
00:01:15.541 + export FIO_BIN=/usr/src/fio-static/fio
00:01:15.541 + FIO_BIN=/usr/src/fio-static/fio
00:01:15.541 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:15.541 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:15.541 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:15.541 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.541 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.541 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:15.541 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.541 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.541 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:15.541 Test configuration:
00:01:15.541 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.541 SPDK_TEST_NVMF=1
00:01:15.541 SPDK_TEST_NVME_CLI=1
00:01:15.541 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.541 SPDK_TEST_NVMF_NICS=e810
00:01:15.541 SPDK_TEST_VFIOUSER=1
00:01:15.541 SPDK_RUN_UBSAN=1
00:01:15.541 NET_TYPE=phy
00:01:15.541 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:15.541 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:15.541 RUN_NIGHTLY=1
00:01:15.541 01:36:57 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:15.541 01:36:57 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:15.541 01:36:57 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:15.541 01:36:57 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:15.541 01:36:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.541 01:36:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.541 01:36:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.541 01:36:57 -- paths/export.sh@5 -- $ export PATH
00:01:15.541 01:36:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.541 01:36:57 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:15.541 01:36:57 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:15.541 01:36:57 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721950617.XXXXXX
00:01:15.541 01:36:57 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721950617.Xw0Xq2
00:01:15.541 01:36:57 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:15.541 01:36:57 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']'
00:01:15.541 01:36:57 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:15.541 01:36:57 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:01:15.541 01:36:57 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:15.541 01:36:57 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:15.541 01:36:57 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:15.541 01:36:57 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:15.541 01:36:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.541 01:36:57 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:01:15.541 01:36:57 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:15.541 01:36:57 -- pm/common@17 -- $ local monitor
00:01:15.541 01:36:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.541 01:36:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.541 01:36:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.541 01:36:57 -- pm/common@21 -- $ date +%s
00:01:15.541 01:36:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.541 01:36:57 -- pm/common@21 -- $ date +%s
00:01:15.541 01:36:57 -- pm/common@25 -- $ sleep 1
00:01:15.541 01:36:57 -- pm/common@21 -- $ date +%s
00:01:15.542 01:36:57 -- pm/common@21 -- $ date +%s
00:01:15.542 01:36:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721950617
00:01:15.542 01:36:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721950617
00:01:15.542 01:36:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721950617
00:01:15.542 01:36:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721950617
00:01:15.542 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721950617_collect-vmstat.pm.log
00:01:15.542 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721950617_collect-cpu-load.pm.log
00:01:15.542 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721950617_collect-cpu-temp.pm.log
00:01:15.542 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721950617_collect-bmc-pm.bmc.pm.log
00:01:16.482 01:36:58 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:16.482 01:36:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:16.482 01:36:58 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:16.482 01:36:58 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:16.482 01:36:58 -- spdk/autobuild.sh@16 -- $ date -u
00:01:16.482 Thu Jul 25 11:36:58 PM UTC 2024
00:01:16.482 01:36:58 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:16.482 v24.09-pre-321-g704257090
00:01:16.482 01:36:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:16.482 01:36:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:16.482 01:36:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:16.482 01:36:58 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:16.482 01:36:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:16.482 01:36:58 -- common/autotest_common.sh@10 -- $ set +x
00:01:16.482 ************************************
00:01:16.482 START TEST ubsan
00:01:16.482 ************************************
00:01:16.482 01:36:58 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:16.482 using ubsan
00:01:16.482
00:01:16.482 real 0m0.000s
00:01:16.482 user 0m0.000s
00:01:16.482 sys 0m0.000s
00:01:16.482 01:36:58 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:16.482 01:36:58 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:16.482 ************************************
00:01:16.482 END TEST ubsan
00:01:16.482 ************************************
00:01:16.482 01:36:58 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:01:16.482 01:36:58 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:16.482 01:36:58 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:16.482 01:36:58 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:16.482 01:36:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:16.482 01:36:58 -- common/autotest_common.sh@10 -- $ set +x
00:01:16.482 ************************************
00:01:16.482 START TEST build_native_dpdk
00:01:16.482 ************************************
00:01:16.482 01:36:58 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:16.482 eeb0605f11 version: 23.11.0
00:01:16.482 238778122a doc: update release notes for 23.11
00:01:16.482 46aa6b3cfc doc: fix description of RSS features
00:01:16.482 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:16.482 7e421ae345 devtools: support skipping forbid rule check
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:16.482 01:36:58 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:16.483 01:36:58 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:16.483 01:36:58 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:16.483 01:36:58 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:16.483 01:36:58 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:16.483 01:36:58 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:16.483 01:36:58 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:16.483 01:36:58 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:16.483 01:36:58 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:16.483 01:36:58 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:16.483 01:36:58 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:01:16.483 01:36:58 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:01:16.483 01:36:58 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:16.483 01:36:58 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:16.483 01:36:58 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:16.483 01:36:58 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:16.483 01:36:58 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:16.483 01:36:58 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:16.483 01:36:58 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:16.483 01:36:58 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:01:16.483 01:36:58 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:16.483 01:36:58 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:01:16.742 01:36:58 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:16.742 patching file config/rte_config.h
00:01:16.742 Hunk #1 succeeded at 60 (offset 1 line).
00:01:16.742 01:36:58 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] ))
00:01:16.742 01:36:58 build_native_dpdk -- scripts/common.sh@365 -- $ return 0
00:01:16.742 01:36:58 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1
00:01:16.742 patching file lib/pcapng/rte_pcapng.c
00:01:16.742 01:36:58 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
00:01:16.742 01:36:58 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s
00:01:16.742 01:36:58 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
00:01:16.742 01:36:58 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:16.742 01:36:58 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:20.939 The Meson build system
00:01:20.939 Version: 1.3.1
00:01:20.939 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:20.939 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:01:20.939 Build type: native build
00:01:20.939 Program cat found: YES (/usr/bin/cat)
00:01:20.939 Project name: DPDK
00:01:20.939 Project version: 23.11.0
00:01:20.939 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:20.939 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:20.939 Host machine cpu family: x86_64
00:01:20.939 Host machine cpu: x86_64
00:01:20.939 Message: ## Building in Developer Mode ##
00:01:20.939 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:20.939 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:20.939 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:20.939 Program python3 found: YES (/usr/bin/python3)
00:01:20.939 Program cat found: YES (/usr/bin/cat)
00:01:20.939 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:20.939 Compiler for C supports arguments -march=native: YES 00:01:20.939 Checking for size of "void *" : 8 00:01:20.939 Checking for size of "void *" : 8 (cached) 00:01:20.939 Library m found: YES 00:01:20.939 Library numa found: YES 00:01:20.939 Has header "numaif.h" : YES 00:01:20.939 Library fdt found: NO 00:01:20.939 Library execinfo found: NO 00:01:20.939 Has header "execinfo.h" : YES 00:01:20.939 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:20.939 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:20.939 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:20.939 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:20.939 Run-time dependency openssl found: YES 3.0.9 00:01:20.939 Run-time dependency libpcap found: YES 1.10.4 00:01:20.939 Has header "pcap.h" with dependency libpcap: YES 00:01:20.939 Compiler for C supports arguments -Wcast-qual: YES 00:01:20.939 Compiler for C supports arguments -Wdeprecated: YES 00:01:20.939 Compiler for C supports arguments -Wformat: YES 00:01:20.939 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:20.939 Compiler for C supports arguments -Wformat-security: NO 00:01:20.939 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:20.939 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:20.939 Compiler for C supports arguments -Wnested-externs: YES 00:01:20.939 Compiler for C supports arguments -Wold-style-definition: YES 00:01:20.939 Compiler for C supports arguments -Wpointer-arith: YES 00:01:20.939 Compiler for C supports arguments -Wsign-compare: YES 00:01:20.939 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:20.939 Compiler for C supports arguments -Wundef: YES 00:01:20.939 Compiler for C supports arguments -Wwrite-strings: YES 00:01:20.939 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:20.939 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:20.939 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:01:20.939 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:20.939 Program objdump found: YES (/usr/bin/objdump) 00:01:20.939 Compiler for C supports arguments -mavx512f: YES 00:01:20.939 Checking if "AVX512 checking" compiles: YES 00:01:20.939 Fetching value of define "__SSE4_2__" : 1 00:01:20.939 Fetching value of define "__AES__" : 1 00:01:20.939 Fetching value of define "__AVX__" : 1 00:01:20.939 Fetching value of define "__AVX2__" : (undefined) 00:01:20.939 Fetching value of define "__AVX512BW__" : (undefined) 00:01:20.939 Fetching value of define "__AVX512CD__" : (undefined) 00:01:20.939 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:20.939 Fetching value of define "__AVX512F__" : (undefined) 00:01:20.939 Fetching value of define "__AVX512VL__" : (undefined) 00:01:20.939 Fetching value of define "__PCLMUL__" : 1 00:01:20.939 Fetching value of define "__RDRND__" : 1 00:01:20.939 Fetching value of define "__RDSEED__" : (undefined) 00:01:20.939 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:20.939 Fetching value of define "__znver1__" : (undefined) 00:01:20.939 Fetching value of define "__znver2__" : (undefined) 00:01:20.940 Fetching value of define "__znver3__" : (undefined) 00:01:20.940 Fetching value of define "__znver4__" : (undefined) 00:01:20.940 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:20.940 Message: lib/log: Defining dependency "log" 00:01:20.940 Message: lib/kvargs: Defining dependency "kvargs" 00:01:20.940 Message: lib/telemetry: Defining dependency "telemetry" 00:01:20.940 Checking for function "getentropy" : NO 00:01:20.940 Message: lib/eal: Defining dependency "eal" 00:01:20.940 Message: lib/ring: Defining dependency "ring" 00:01:20.940 Message: lib/rcu: Defining dependency "rcu" 00:01:20.940 Message: lib/mempool: Defining dependency "mempool" 00:01:20.940 Message: lib/mbuf: Defining dependency "mbuf" 00:01:20.940 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:20.940 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:20.940 Compiler for C supports arguments -mpclmul: YES 00:01:20.940 Compiler for C supports arguments -maes: YES 00:01:20.940 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:20.940 Compiler for C supports arguments -mavx512bw: YES 00:01:20.940 Compiler for C supports arguments -mavx512dq: YES 00:01:20.940 Compiler for C supports arguments -mavx512vl: YES 00:01:20.940 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:20.940 Compiler for C supports arguments -mavx2: YES 00:01:20.940 Compiler for C supports arguments -mavx: YES 00:01:20.940 Message: lib/net: Defining dependency "net" 00:01:20.940 Message: lib/meter: Defining dependency "meter" 00:01:20.940 Message: lib/ethdev: Defining dependency "ethdev" 00:01:20.940 Message: lib/pci: Defining dependency "pci" 00:01:20.940 Message: lib/cmdline: Defining dependency "cmdline" 00:01:20.940 Message: lib/metrics: Defining dependency "metrics" 00:01:20.940 Message: lib/hash: Defining dependency "hash" 00:01:20.940 Message: lib/timer: Defining dependency "timer" 00:01:20.940 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:20.940 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:20.940 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:20.940 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:20.940 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:20.940 Message: lib/acl: Defining dependency "acl" 00:01:20.940 Message: lib/bbdev: Defining dependency "bbdev" 00:01:20.940 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:20.940 Run-time dependency libelf found: YES 0.190 00:01:20.940 Message: lib/bpf: Defining dependency "bpf" 00:01:20.940 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:20.940 Message: lib/compressdev: Defining 
dependency "compressdev" 00:01:20.940 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:20.940 Message: lib/distributor: Defining dependency "distributor" 00:01:20.940 Message: lib/dmadev: Defining dependency "dmadev" 00:01:20.940 Message: lib/efd: Defining dependency "efd" 00:01:20.940 Message: lib/eventdev: Defining dependency "eventdev" 00:01:20.940 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:20.940 Message: lib/gpudev: Defining dependency "gpudev" 00:01:20.940 Message: lib/gro: Defining dependency "gro" 00:01:20.940 Message: lib/gso: Defining dependency "gso" 00:01:20.940 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:20.940 Message: lib/jobstats: Defining dependency "jobstats" 00:01:20.940 Message: lib/latencystats: Defining dependency "latencystats" 00:01:20.940 Message: lib/lpm: Defining dependency "lpm" 00:01:20.940 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:20.940 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:20.940 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:20.940 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:20.940 Message: lib/member: Defining dependency "member" 00:01:20.940 Message: lib/pcapng: Defining dependency "pcapng" 00:01:20.940 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:20.940 Message: lib/power: Defining dependency "power" 00:01:20.940 Message: lib/rawdev: Defining dependency "rawdev" 00:01:20.940 Message: lib/regexdev: Defining dependency "regexdev" 00:01:20.940 Message: lib/mldev: Defining dependency "mldev" 00:01:20.940 Message: lib/rib: Defining dependency "rib" 00:01:20.940 Message: lib/reorder: Defining dependency "reorder" 00:01:20.940 Message: lib/sched: Defining dependency "sched" 00:01:20.940 Message: lib/security: Defining dependency "security" 00:01:20.940 Message: lib/stack: Defining dependency "stack" 00:01:20.940 Has header "linux/userfaultfd.h" : YES 00:01:20.940 Has 
header "linux/vduse.h" : YES 00:01:20.940 Message: lib/vhost: Defining dependency "vhost" 00:01:20.940 Message: lib/ipsec: Defining dependency "ipsec" 00:01:20.940 Message: lib/pdcp: Defining dependency "pdcp" 00:01:20.940 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:20.940 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:20.940 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:20.940 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:20.940 Message: lib/fib: Defining dependency "fib" 00:01:20.940 Message: lib/port: Defining dependency "port" 00:01:20.940 Message: lib/pdump: Defining dependency "pdump" 00:01:20.940 Message: lib/table: Defining dependency "table" 00:01:20.940 Message: lib/pipeline: Defining dependency "pipeline" 00:01:20.940 Message: lib/graph: Defining dependency "graph" 00:01:20.940 Message: lib/node: Defining dependency "node" 00:01:22.325 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:22.325 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:22.325 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:22.325 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:22.326 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:22.326 Compiler for C supports arguments -Wno-unused-value: YES 00:01:22.326 Compiler for C supports arguments -Wno-format: YES 00:01:22.326 Compiler for C supports arguments -Wno-format-security: YES 00:01:22.326 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:22.326 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:22.326 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:22.326 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:22.326 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:22.326 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:22.326 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:01:22.326 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:22.326 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:22.326 Has header "sys/epoll.h" : YES 00:01:22.326 Program doxygen found: YES (/usr/bin/doxygen) 00:01:22.326 Configuring doxy-api-html.conf using configuration 00:01:22.326 Configuring doxy-api-man.conf using configuration 00:01:22.326 Program mandb found: YES (/usr/bin/mandb) 00:01:22.326 Program sphinx-build found: NO 00:01:22.326 Configuring rte_build_config.h using configuration 00:01:22.326 Message: 00:01:22.326 ================= 00:01:22.326 Applications Enabled 00:01:22.326 ================= 00:01:22.326 00:01:22.326 apps: 00:01:22.326 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:22.326 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:22.326 test-pmd, test-regex, test-sad, test-security-perf, 00:01:22.326 00:01:22.326 Message: 00:01:22.326 ================= 00:01:22.326 Libraries Enabled 00:01:22.326 ================= 00:01:22.326 00:01:22.326 libs: 00:01:22.326 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:22.326 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:22.326 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:22.326 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:22.326 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:22.326 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:22.326 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:22.326 00:01:22.326 00:01:22.326 Message: 00:01:22.326 =============== 00:01:22.326 Drivers Enabled 00:01:22.326 =============== 00:01:22.326 00:01:22.326 common: 00:01:22.326 00:01:22.326 bus: 00:01:22.326 pci, vdev, 00:01:22.326 mempool: 00:01:22.326 ring, 00:01:22.326 dma: 00:01:22.326 
00:01:22.326 net: 00:01:22.326 i40e, 00:01:22.326 raw: 00:01:22.326 00:01:22.326 crypto: 00:01:22.326 00:01:22.326 compress: 00:01:22.326 00:01:22.326 regex: 00:01:22.326 00:01:22.326 ml: 00:01:22.326 00:01:22.326 vdpa: 00:01:22.326 00:01:22.326 event: 00:01:22.326 00:01:22.326 baseband: 00:01:22.326 00:01:22.326 gpu: 00:01:22.326 00:01:22.326 00:01:22.326 Message: 00:01:22.326 ================= 00:01:22.326 Content Skipped 00:01:22.326 ================= 00:01:22.326 00:01:22.326 apps: 00:01:22.326 00:01:22.326 libs: 00:01:22.326 00:01:22.326 drivers: 00:01:22.326 common/cpt: not in enabled drivers build config 00:01:22.326 common/dpaax: not in enabled drivers build config 00:01:22.326 common/iavf: not in enabled drivers build config 00:01:22.326 common/idpf: not in enabled drivers build config 00:01:22.326 common/mvep: not in enabled drivers build config 00:01:22.326 common/octeontx: not in enabled drivers build config 00:01:22.326 bus/auxiliary: not in enabled drivers build config 00:01:22.326 bus/cdx: not in enabled drivers build config 00:01:22.326 bus/dpaa: not in enabled drivers build config 00:01:22.326 bus/fslmc: not in enabled drivers build config 00:01:22.326 bus/ifpga: not in enabled drivers build config 00:01:22.326 bus/platform: not in enabled drivers build config 00:01:22.326 bus/vmbus: not in enabled drivers build config 00:01:22.326 common/cnxk: not in enabled drivers build config 00:01:22.326 common/mlx5: not in enabled drivers build config 00:01:22.326 common/nfp: not in enabled drivers build config 00:01:22.326 common/qat: not in enabled drivers build config 00:01:22.326 common/sfc_efx: not in enabled drivers build config 00:01:22.326 mempool/bucket: not in enabled drivers build config 00:01:22.326 mempool/cnxk: not in enabled drivers build config 00:01:22.326 mempool/dpaa: not in enabled drivers build config 00:01:22.326 mempool/dpaa2: not in enabled drivers build config 00:01:22.326 mempool/octeontx: not in enabled drivers build config 
00:01:22.326 mempool/stack: not in enabled drivers build config 00:01:22.326 dma/cnxk: not in enabled drivers build config 00:01:22.326 dma/dpaa: not in enabled drivers build config 00:01:22.326 dma/dpaa2: not in enabled drivers build config 00:01:22.326 dma/hisilicon: not in enabled drivers build config 00:01:22.326 dma/idxd: not in enabled drivers build config 00:01:22.326 dma/ioat: not in enabled drivers build config 00:01:22.326 dma/skeleton: not in enabled drivers build config 00:01:22.326 net/af_packet: not in enabled drivers build config 00:01:22.326 net/af_xdp: not in enabled drivers build config 00:01:22.326 net/ark: not in enabled drivers build config 00:01:22.326 net/atlantic: not in enabled drivers build config 00:01:22.326 net/avp: not in enabled drivers build config 00:01:22.326 net/axgbe: not in enabled drivers build config 00:01:22.326 net/bnx2x: not in enabled drivers build config 00:01:22.326 net/bnxt: not in enabled drivers build config 00:01:22.326 net/bonding: not in enabled drivers build config 00:01:22.326 net/cnxk: not in enabled drivers build config 00:01:22.326 net/cpfl: not in enabled drivers build config 00:01:22.326 net/cxgbe: not in enabled drivers build config 00:01:22.326 net/dpaa: not in enabled drivers build config 00:01:22.326 net/dpaa2: not in enabled drivers build config 00:01:22.326 net/e1000: not in enabled drivers build config 00:01:22.326 net/ena: not in enabled drivers build config 00:01:22.326 net/enetc: not in enabled drivers build config 00:01:22.326 net/enetfec: not in enabled drivers build config 00:01:22.326 net/enic: not in enabled drivers build config 00:01:22.326 net/failsafe: not in enabled drivers build config 00:01:22.326 net/fm10k: not in enabled drivers build config 00:01:22.326 net/gve: not in enabled drivers build config 00:01:22.326 net/hinic: not in enabled drivers build config 00:01:22.326 net/hns3: not in enabled drivers build config 00:01:22.326 net/iavf: not in enabled drivers build config 00:01:22.326 
net/ice: not in enabled drivers build config 00:01:22.326 net/idpf: not in enabled drivers build config 00:01:22.326 net/igc: not in enabled drivers build config 00:01:22.326 net/ionic: not in enabled drivers build config 00:01:22.326 net/ipn3ke: not in enabled drivers build config 00:01:22.326 net/ixgbe: not in enabled drivers build config 00:01:22.326 net/mana: not in enabled drivers build config 00:01:22.326 net/memif: not in enabled drivers build config 00:01:22.326 net/mlx4: not in enabled drivers build config 00:01:22.326 net/mlx5: not in enabled drivers build config 00:01:22.326 net/mvneta: not in enabled drivers build config 00:01:22.326 net/mvpp2: not in enabled drivers build config 00:01:22.326 net/netvsc: not in enabled drivers build config 00:01:22.326 net/nfb: not in enabled drivers build config 00:01:22.326 net/nfp: not in enabled drivers build config 00:01:22.326 net/ngbe: not in enabled drivers build config 00:01:22.326 net/null: not in enabled drivers build config 00:01:22.326 net/octeontx: not in enabled drivers build config 00:01:22.326 net/octeon_ep: not in enabled drivers build config 00:01:22.326 net/pcap: not in enabled drivers build config 00:01:22.326 net/pfe: not in enabled drivers build config 00:01:22.326 net/qede: not in enabled drivers build config 00:01:22.326 net/ring: not in enabled drivers build config 00:01:22.326 net/sfc: not in enabled drivers build config 00:01:22.326 net/softnic: not in enabled drivers build config 00:01:22.326 net/tap: not in enabled drivers build config 00:01:22.326 net/thunderx: not in enabled drivers build config 00:01:22.326 net/txgbe: not in enabled drivers build config 00:01:22.326 net/vdev_netvsc: not in enabled drivers build config 00:01:22.326 net/vhost: not in enabled drivers build config 00:01:22.326 net/virtio: not in enabled drivers build config 00:01:22.326 net/vmxnet3: not in enabled drivers build config 00:01:22.326 raw/cnxk_bphy: not in enabled drivers build config 00:01:22.326 raw/cnxk_gpio: 
not in enabled drivers build config 00:01:22.326 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:22.326 raw/ifpga: not in enabled drivers build config 00:01:22.326 raw/ntb: not in enabled drivers build config 00:01:22.326 raw/skeleton: not in enabled drivers build config 00:01:22.326 crypto/armv8: not in enabled drivers build config 00:01:22.326 crypto/bcmfs: not in enabled drivers build config 00:01:22.326 crypto/caam_jr: not in enabled drivers build config 00:01:22.326 crypto/ccp: not in enabled drivers build config 00:01:22.326 crypto/cnxk: not in enabled drivers build config 00:01:22.326 crypto/dpaa_sec: not in enabled drivers build config 00:01:22.326 crypto/dpaa2_sec: not in enabled drivers build config 00:01:22.326 crypto/ipsec_mb: not in enabled drivers build config 00:01:22.326 crypto/mlx5: not in enabled drivers build config 00:01:22.326 crypto/mvsam: not in enabled drivers build config 00:01:22.326 crypto/nitrox: not in enabled drivers build config 00:01:22.326 crypto/null: not in enabled drivers build config 00:01:22.326 crypto/octeontx: not in enabled drivers build config 00:01:22.327 crypto/openssl: not in enabled drivers build config 00:01:22.327 crypto/scheduler: not in enabled drivers build config 00:01:22.327 crypto/uadk: not in enabled drivers build config 00:01:22.327 crypto/virtio: not in enabled drivers build config 00:01:22.327 compress/isal: not in enabled drivers build config 00:01:22.327 compress/mlx5: not in enabled drivers build config 00:01:22.327 compress/octeontx: not in enabled drivers build config 00:01:22.327 compress/zlib: not in enabled drivers build config 00:01:22.327 regex/mlx5: not in enabled drivers build config 00:01:22.327 regex/cn9k: not in enabled drivers build config 00:01:22.327 ml/cnxk: not in enabled drivers build config 00:01:22.327 vdpa/ifc: not in enabled drivers build config 00:01:22.327 vdpa/mlx5: not in enabled drivers build config 00:01:22.327 vdpa/nfp: not in enabled drivers build config 
00:01:22.327 vdpa/sfc: not in enabled drivers build config 00:01:22.327 event/cnxk: not in enabled drivers build config 00:01:22.327 event/dlb2: not in enabled drivers build config 00:01:22.327 event/dpaa: not in enabled drivers build config 00:01:22.327 event/dpaa2: not in enabled drivers build config 00:01:22.327 event/dsw: not in enabled drivers build config 00:01:22.327 event/opdl: not in enabled drivers build config 00:01:22.327 event/skeleton: not in enabled drivers build config 00:01:22.327 event/sw: not in enabled drivers build config 00:01:22.327 event/octeontx: not in enabled drivers build config 00:01:22.327 baseband/acc: not in enabled drivers build config 00:01:22.327 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:22.327 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:22.327 baseband/la12xx: not in enabled drivers build config 00:01:22.327 baseband/null: not in enabled drivers build config 00:01:22.327 baseband/turbo_sw: not in enabled drivers build config 00:01:22.327 gpu/cuda: not in enabled drivers build config 00:01:22.327 00:01:22.327 00:01:22.327 Build targets in project: 220 00:01:22.327 00:01:22.327 DPDK 23.11.0 00:01:22.327 00:01:22.327 User defined options 00:01:22.327 libdir : lib 00:01:22.327 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.327 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:22.327 c_link_args : 00:01:22.327 enable_docs : false 00:01:22.327 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:22.327 enable_kmods : false 00:01:22.327 machine : native 00:01:22.327 tests : false 00:01:22.327 00:01:22.327 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:22.327 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
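The configure step above finishes with two deprecation warnings: the `machine` option (meson suggests `cpu_instruction_set` instead) and invoking the setup command as bare `meson [options]` rather than `meson setup [options]`. A hedged sketch of an equivalent, warning-free invocation with the same options this job passes — assuming a DPDK release that accepts `cpu_instruction_set`, as the warning text indicates:

```shell
# Sketch only: same configuration as the job above, reworked to avoid
# both deprecation warnings (explicit `meson setup` subcommand, and
# cpu_instruction_set in place of the deprecated machine option).
meson setup build-tmp \
    --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
    --libdir lib \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Dc_link_args= \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
```

The resulting build configuration ("User defined options" above) would be unchanged; only the deprecated spellings differ.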
00:01:22.327 01:37:04 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:22.327 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:22.327 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:22.327 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:22.327 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:22.327 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:22.327 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:22.327 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:22.327 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:22.327 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:22.327 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:22.327 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:22.327 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:22.327 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:22.327 [13/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:22.327 [14/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:22.327 [15/710] Linking static target lib/librte_kvargs.a 00:01:22.588 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:22.588 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:22.588 [18/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:22.588 [19/710] Linking static target lib/librte_log.a 00:01:22.588 [20/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:22.851 [21/710] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:22.851 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.421 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.421 [24/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:23.421 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:23.421 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:23.421 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:23.421 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:23.421 [29/710] Linking target lib/librte_log.so.24.0 00:01:23.421 [30/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:23.421 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:23.421 [32/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:23.421 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:23.421 [34/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:23.421 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:23.421 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:23.421 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:23.421 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:23.421 [39/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:23.421 [40/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:23.421 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:23.421 [42/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:23.421 [43/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:23.421 [44/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:23.421 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:23.421 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:23.683 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:23.683 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:23.683 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:23.683 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:23.683 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:23.683 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:23.683 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:23.683 [54/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:23.683 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:23.683 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:23.683 [57/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:23.683 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:23.683 [59/710] Linking target lib/librte_kvargs.so.24.0 00:01:23.683 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:23.683 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:23.683 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:23.946 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:23.946 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:23.946 [65/710] Generating symbol file 
lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:23.946 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:23.946 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:24.224 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:24.224 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:24.224 [70/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:24.224 [71/710] Linking static target lib/librte_pci.a 00:01:24.224 [72/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:24.224 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:24.224 [74/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:24.224 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:24.516 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:24.516 [77/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.516 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:24.516 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:24.516 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:24.516 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:24.516 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:24.516 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:24.516 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:24.516 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:24.516 [86/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:24.516 [87/710] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:24.789 [88/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:24.789 [89/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:24.789 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:24.789 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:24.789 [92/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:24.789 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:24.789 [94/710] Linking static target lib/librte_ring.a 00:01:24.789 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:24.789 [96/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:24.789 [97/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:24.789 [98/710] Linking static target lib/librte_meter.a 00:01:24.789 [99/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:24.789 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:24.789 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:24.789 [102/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:24.789 [103/710] Linking static target lib/librte_telemetry.a 00:01:24.789 [104/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:24.789 [105/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:25.050 [106/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:25.050 [107/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:25.050 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:25.050 [109/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:25.050 [110/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 
00:01:25.050 [111/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:25.050 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:25.051 [113/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:25.314 [114/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:25.314 [115/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.314 [116/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.314 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:25.314 [118/710] Linking static target lib/librte_eal.a 00:01:25.314 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:25.314 [120/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:25.314 [121/710] Linking static target lib/librte_net.a 00:01:25.314 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:25.314 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:25.314 [124/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:25.577 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:25.577 [126/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:25.577 [127/710] Linking static target lib/librte_mempool.a 00:01:25.577 [128/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.577 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:25.577 [130/710] Linking static target lib/librte_cmdline.a 00:01:25.577 [131/710] Linking target lib/librte_telemetry.so.24.0 00:01:25.577 [132/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.840 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 
00:01:25.840 [134/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:25.840 [135/710] Linking static target lib/librte_cfgfile.a 00:01:25.840 [136/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:25.840 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:25.840 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:25.840 [139/710] Linking static target lib/librte_metrics.a 00:01:25.840 [140/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:25.840 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:25.840 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:25.840 [143/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:26.105 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:26.105 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:26.105 [146/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:26.105 [147/710] Linking static target lib/librte_bitratestats.a 00:01:26.105 [148/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:26.105 [149/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:26.105 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:26.105 [151/710] Linking static target lib/librte_rcu.a 00:01:26.105 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:26.369 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.369 [154/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:26.369 [155/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.369 [156/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:26.370 [157/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:26.370 [158/710] Linking static target lib/librte_timer.a 00:01:26.370 [159/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:26.370 [160/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.370 [161/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:26.633 [162/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:26.633 [163/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.633 [164/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:26.633 [165/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.633 [166/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:26.633 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:26.892 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:26.892 [169/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.892 [170/710] Linking static target lib/librte_bbdev.a 00:01:26.892 [171/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:26.892 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:26.892 [173/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:26.893 [174/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.893 [175/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:27.159 [176/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:27.159 [177/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:27.159 [178/710] Linking static target 
lib/librte_compressdev.a 00:01:27.159 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:27.159 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:27.423 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:27.423 [182/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:27.423 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:27.423 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:27.423 [185/710] Linking static target lib/librte_distributor.a 00:01:27.686 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:27.686 [187/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:27.686 [188/710] Linking static target lib/librte_bpf.a 00:01:27.686 [189/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:27.686 [190/710] Linking static target lib/librte_dmadev.a 00:01:27.686 [191/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.949 [192/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:27.949 [193/710] Linking static target lib/librte_dispatcher.a 00:01:27.949 [194/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.949 [195/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:27.949 [196/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:27.949 [197/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.949 [198/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:27.949 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:27.949 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:27.949 [201/710] Compiling C object 
lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:28.211 [202/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:28.211 [203/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:28.211 [204/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:28.211 [205/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:28.211 [206/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:28.211 [207/710] Linking static target lib/librte_gpudev.a 00:01:28.211 [208/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:28.211 [209/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:28.211 [210/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:28.211 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:28.211 [212/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.211 [213/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:28.211 [214/710] Linking static target lib/librte_gro.a 00:01:28.478 [215/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:28.478 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.478 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:28.478 [218/710] Linking static target lib/librte_jobstats.a 00:01:28.478 [219/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:28.478 [220/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:28.478 [221/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.746 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.746 [223/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:28.746 [224/710] 
Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:29.008 [225/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.008 [226/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:29.008 [227/710] Linking static target lib/librte_latencystats.a 00:01:29.008 [228/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:29.008 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:29.008 [230/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:29.008 [231/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:29.008 [232/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:29.008 [233/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:29.008 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:29.008 [235/710] Linking static target lib/librte_ip_frag.a 00:01:29.008 [236/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:29.271 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:29.271 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.271 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:29.271 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:29.271 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:29.533 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:29.533 [243/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.533 [244/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:29.533 [245/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:29.533 [246/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:29.533 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:29.792 [248/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:29.792 [249/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:29.792 [250/710] Linking static target lib/librte_gso.a 00:01:29.792 [251/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:29.792 [252/710] Linking static target lib/librte_regexdev.a 00:01:29.792 [253/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:29.792 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:29.792 [255/710] Linking static target lib/librte_rawdev.a 00:01:30.054 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:30.054 [257/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:30.054 [258/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:30.054 [259/710] Linking static target lib/librte_efd.a 00:01:30.054 [260/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:30.054 [261/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:30.054 [262/710] Linking static target lib/librte_mldev.a 00:01:30.054 [263/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.054 [264/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:30.054 [265/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:30.054 [266/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:30.054 [267/710] Linking static target lib/librte_pcapng.a 00:01:30.318 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:30.318 [269/710] Compiling C object 
lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:30.318 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:30.318 [271/710] Linking static target lib/librte_stack.a 00:01:30.318 [272/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:30.318 [273/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.318 [274/710] Linking static target lib/librte_lpm.a 00:01:30.318 [275/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:30.318 [276/710] Linking static target lib/acl/libavx2_tmp.a 00:01:30.587 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:30.587 [278/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:30.587 [279/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:30.587 [280/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.587 [281/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:30.587 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:30.587 [283/710] Linking static target lib/librte_hash.a 00:01:30.587 [284/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:30.587 [285/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:30.587 [286/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.587 [287/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.847 [288/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:30.847 [289/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:30.847 [290/710] Linking static target lib/librte_reorder.a 00:01:30.847 [291/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:30.847 [292/710] Linking static target 
lib/acl/libavx512_tmp.a 00:01:30.847 [293/710] Linking static target lib/librte_acl.a 00:01:30.847 [294/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.847 [295/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.847 [296/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:31.113 [297/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:31.113 [298/710] Linking static target lib/librte_power.a 00:01:31.113 [299/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:31.113 [300/710] Linking static target lib/librte_security.a 00:01:31.113 [301/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:31.113 [302/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:31.374 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:31.374 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:31.374 [305/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.374 [306/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:31.374 [307/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:31.374 [308/710] Linking static target lib/librte_mbuf.a 00:01:31.374 [309/710] Linking static target lib/librte_rib.a 00:01:31.374 [310/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.374 [311/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:31.374 [312/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.374 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:31.638 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:31.638 [315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:31.638 
[316/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:31.638 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:31.638 [318/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:31.638 [319/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:31.638 [320/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:31.907 [321/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:31.907 [322/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:31.907 [323/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.907 [324/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:31.907 [325/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:31.907 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.907 [327/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.165 [328/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:32.165 [329/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.165 [330/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:32.165 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:32.165 [332/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.427 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:32.427 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:32.690 [335/710] Linking static target lib/librte_eventdev.a 00:01:32.690 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:32.690 [337/710] Linking static target lib/librte_member.a 00:01:32.690 [338/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:32.690 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:32.951 [340/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:32.951 [341/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:32.951 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:32.951 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:32.951 [344/710] Linking static target lib/librte_cryptodev.a 00:01:32.951 [345/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:32.951 [346/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:32.951 [347/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:32.951 [348/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:32.951 [349/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:32.951 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:33.213 [351/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:33.213 [352/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:33.213 [353/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:33.213 [354/710] Linking static target lib/librte_sched.a 00:01:33.213 [355/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:33.213 [356/710] Linking static target lib/librte_ethdev.a 00:01:33.213 [357/710] Linking static target lib/librte_fib.a 00:01:33.213 [358/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:33.213 [359/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.213 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:33.213 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:33.475 
[362/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:33.475 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:33.475 [364/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:33.475 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:33.475 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:33.475 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:33.737 [368/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:33.737 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:33.737 [370/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.737 [371/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.737 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:33.737 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:33.999 [374/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:33.999 [375/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:33.999 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:33.999 [377/710] Linking static target lib/librte_pdump.a 00:01:34.260 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:34.260 [379/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:34.260 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:34.260 [381/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:34.260 [382/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:34.260 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:34.260 [384/710] Compiling C object 
lib/librte_graph.a.p/graph_node.c.o 00:01:34.260 [385/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:34.260 [386/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:34.522 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:34.522 [388/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:34.522 [389/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:34.522 [390/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:34.522 [391/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.522 [392/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:34.522 [393/710] Linking static target lib/librte_ipsec.a 00:01:34.522 [394/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:34.790 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:34.790 [396/710] Linking static target lib/librte_table.a 00:01:34.790 [397/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.790 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:34.790 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:35.069 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:35.069 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.331 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:35.331 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:35.598 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:35.598 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:35.598 [406/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 
00:01:35.598 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:35.598 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:35.598 [409/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:35.598 [410/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:35.859 [411/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:35.859 [412/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:35.859 [413/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:35.859 [414/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:35.859 [415/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.859 [416/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.859 [417/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.126 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:36.126 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:36.126 [420/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:36.126 [421/710] Linking target lib/librte_eal.so.24.0 00:01:36.126 [422/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:36.126 [423/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:36.126 [424/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.126 [425/710] Linking static target drivers/librte_bus_vdev.a 00:01:36.126 [426/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.385 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:36.385 [428/710] Linking static target 
lib/librte_port.a 00:01:36.385 [429/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:36.385 [430/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:36.385 [431/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:36.385 [432/710] Linking target lib/librte_ring.so.24.0 00:01:36.385 [433/710] Linking target lib/librte_pci.so.24.0 00:01:36.385 [434/710] Linking target lib/librte_meter.so.24.0 00:01:36.648 [435/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:36.648 [436/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.648 [437/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:36.648 [438/710] Linking target lib/librte_timer.so.24.0 00:01:36.648 [439/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:36.648 [440/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:36.648 [441/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:36.648 [442/710] Linking target lib/librte_acl.so.24.0 00:01:36.648 [443/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:36.648 [444/710] Linking target lib/librte_cfgfile.so.24.0 00:01:36.912 [445/710] Linking target lib/librte_dmadev.so.24.0 00:01:36.912 [446/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:36.912 [447/710] Linking target lib/librte_jobstats.so.24.0 00:01:36.912 [448/710] Linking static target lib/librte_graph.a 00:01:36.912 [449/710] Linking target lib/librte_rcu.so.24.0 00:01:36.912 [450/710] Linking target lib/librte_rawdev.so.24.0 00:01:36.913 [451/710] Linking target lib/librte_mempool.so.24.0 00:01:36.913 [452/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.913 [453/710] Linking target 
lib/librte_stack.so.24.0 00:01:36.913 [454/710] Linking static target drivers/librte_bus_pci.a 00:01:36.913 [455/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.913 [456/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:36.913 [457/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:36.913 [458/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:36.913 [459/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:36.913 [460/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:36.913 [461/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:36.913 [462/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:37.175 [463/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:37.175 [464/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:37.175 [465/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:37.175 [466/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:37.175 [467/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:37.175 [468/710] Linking target lib/librte_mbuf.so.24.0 00:01:37.175 [469/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.175 [470/710] Linking target lib/librte_rib.so.24.0 00:01:37.175 [471/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:37.438 [472/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:37.438 [473/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:37.438 [474/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:37.438 [475/710] Linking static target 
drivers/librte_mempool_ring.a 00:01:37.438 [476/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:37.438 [477/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:37.438 [478/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:37.438 [479/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:37.438 [480/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:37.438 [481/710] Linking target lib/librte_net.so.24.0 00:01:37.438 [482/710] Linking target lib/librte_bbdev.so.24.0 00:01:37.438 [483/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:37.438 [484/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:37.438 [485/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:37.438 [486/710] Linking target lib/librte_compressdev.so.24.0 00:01:37.438 [487/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:37.438 [488/710] Linking target lib/librte_cryptodev.so.24.0 00:01:37.706 [489/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:37.706 [490/710] Linking target lib/librte_gpudev.so.24.0 00:01:37.706 [491/710] Linking target lib/librte_distributor.so.24.0 00:01:37.706 [492/710] Linking target lib/librte_regexdev.so.24.0 00:01:37.706 [493/710] Linking target lib/librte_mldev.so.24.0 00:01:37.706 [494/710] Linking target lib/librte_reorder.so.24.0 00:01:37.706 [495/710] Linking target lib/librte_sched.so.24.0 00:01:37.706 [496/710] Linking target lib/librte_fib.so.24.0 00:01:37.706 [497/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.706 [498/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:37.706 [499/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:37.706 [500/710] Generating 
symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:37.706 [501/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:37.706 [502/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:37.706 [503/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:37.706 [504/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.706 [505/710] Linking target lib/librte_cmdline.so.24.0 00:01:37.706 [506/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:37.706 [507/710] Linking target lib/librte_hash.so.24.0 00:01:37.967 [508/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:37.967 [509/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:37.967 [510/710] Linking target lib/librte_security.so.24.0 00:01:37.967 [511/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:37.967 [512/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:37.967 [513/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:38.232 [514/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:38.232 [515/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:38.232 [516/710] Linking target lib/librte_efd.so.24.0 00:01:38.232 [517/710] Linking target lib/librte_member.so.24.0 00:01:38.232 [518/710] Linking target lib/librte_lpm.so.24.0 00:01:38.232 [519/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:38.232 [520/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:38.232 [521/710] Linking target lib/librte_ipsec.so.24.0 00:01:38.232 [522/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:38.232 [523/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 
00:01:38.496 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:38.496 [525/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:38.496 [526/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:38.496 [527/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:38.758 [528/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:38.758 [529/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:38.758 [530/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:38.758 [531/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:38.758 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:39.040 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:39.040 [534/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:39.040 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:39.300 [536/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:39.300 [537/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:39.300 [538/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:39.300 [539/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:39.300 [540/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:39.300 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:39.565 [542/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:39.565 [543/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:39.829 [544/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:39.829 [545/710] Compiling C 
object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:39.829 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:39.829 [547/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:39.829 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:39.829 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:39.829 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:39.829 [551/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:39.829 [552/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:39.829 [553/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:40.115 [554/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:40.115 [555/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:40.384 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:40.384 [557/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:40.384 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:40.384 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:40.646 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:40.911 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:40.911 [562/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:41.174 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:41.174 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:41.174 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:41.174 [566/710] Compiling C 
object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:41.174 [567/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:41.174 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:41.174 [569/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:41.439 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:41.439 [571/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.439 [572/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:41.439 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:41.439 [574/710] Linking target lib/librte_ethdev.so.24.0 00:01:41.700 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:41.700 [576/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:41.700 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:41.700 [578/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:41.700 [579/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:41.961 [580/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:41.961 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:41.961 [582/710] Linking target lib/librte_metrics.so.24.0 00:01:41.961 [583/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:41.961 [584/710] Linking target lib/librte_bpf.so.24.0 00:01:41.961 [585/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:41.961 [586/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:41.961 [587/710] Linking target lib/librte_eventdev.so.24.0 00:01:41.961 [588/710] 
Linking target lib/librte_gro.so.24.0 00:01:41.961 [589/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:41.961 [590/710] Linking target lib/librte_gso.so.24.0 00:01:41.961 [591/710] Linking target lib/librte_ip_frag.so.24.0 00:01:42.223 [592/710] Linking static target lib/librte_pdcp.a 00:01:42.223 [593/710] Linking target lib/librte_pcapng.so.24.0 00:01:42.223 [594/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:42.223 [595/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:42.223 [596/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:42.223 [597/710] Linking target lib/librte_power.so.24.0 00:01:42.223 [598/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:42.223 [599/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:42.223 [600/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:42.223 [601/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:42.223 [602/710] Linking target lib/librte_bitratestats.so.24.0 00:01:42.223 [603/710] Linking target lib/librte_latencystats.so.24.0 00:01:42.223 [604/710] Linking target lib/librte_dispatcher.so.24.0 00:01:42.223 [605/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:42.223 [606/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:42.490 [607/710] Linking target lib/librte_pdump.so.24.0 00:01:42.490 [608/710] Linking target lib/librte_port.so.24.0 00:01:42.490 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:42.490 [610/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:42.490 [611/710] Linking target lib/librte_graph.so.24.0 00:01:42.490 [612/710] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:42.490 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:42.490 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:42.749 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:42.749 [616/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.749 [617/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:42.749 [618/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:42.749 [619/710] Linking target lib/librte_pdcp.so.24.0 00:01:42.749 [620/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:42.749 [621/710] Linking target lib/librte_table.so.24.0 00:01:42.749 [622/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:43.015 [623/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:43.015 [624/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:43.016 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:43.016 [626/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:43.016 [627/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:43.016 [628/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:43.016 [629/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:43.586 [630/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:43.586 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:43.586 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:43.586 [633/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:43.586 [634/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:43.844 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:43.844 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:43.844 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:43.844 [638/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:43.844 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:43.844 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:43.844 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:44.103 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:44.361 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:44.361 [644/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:44.361 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:44.361 [646/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:44.361 [647/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:44.361 [648/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:44.361 [649/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:44.619 [650/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:44.619 [651/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:44.619 [652/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:44.619 [653/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:44.619 [654/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:44.619 
[655/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:44.878 [656/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:44.878 [657/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:44.878 [658/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:44.878 [659/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:45.135 [660/710] Linking static target drivers/librte_net_i40e.a 00:01:45.135 [661/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:45.135 [662/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:45.394 [663/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:45.652 [664/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:45.652 [665/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:45.652 [666/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.652 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:45.652 [668/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:45.652 [669/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:46.227 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:46.485 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:46.742 [672/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:46.742 [673/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:46.742 [674/710] Linking static target lib/librte_node.a 00:01:47.000 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.258 [676/710] Linking target lib/librte_node.so.24.0 00:01:47.824 
[677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:48.082 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:48.341 [679/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:49.715 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:51.088 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:56.352 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:35.119 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:35.119 [684/710] Linking static target lib/librte_vhost.a 00:02:35.119 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.119 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:37.649 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:37.649 [688/710] Linking static target lib/librte_pipeline.a 00:02:38.217 [689/710] Linking target app/dpdk-test-acl 00:02:38.217 [690/710] Linking target app/dpdk-dumpcap 00:02:38.217 [691/710] Linking target app/dpdk-test-sad 00:02:38.217 [692/710] Linking target app/dpdk-test-cmdline 00:02:38.217 [693/710] Linking target app/dpdk-test-flow-perf 00:02:38.217 [694/710] Linking target app/dpdk-test-mldev 00:02:38.217 [695/710] Linking target app/dpdk-test-security-perf 00:02:38.217 [696/710] Linking target app/dpdk-test-fib 00:02:38.217 [697/710] Linking target app/dpdk-test-gpudev 00:02:38.217 [698/710] Linking target app/dpdk-test-regex 00:02:38.217 [699/710] Linking target app/dpdk-proc-info 00:02:38.217 [700/710] Linking target app/dpdk-test-pipeline 00:02:38.217 [701/710] Linking target app/dpdk-pdump 00:02:38.217 [702/710] Linking target app/dpdk-test-compress-perf 00:02:38.218 [703/710] Linking target app/dpdk-test-crypto-perf 00:02:38.218 [704/710] Linking target app/dpdk-test-dma-perf 00:02:38.218 [705/710] 
Linking target app/dpdk-test-bbdev 00:02:38.218 [706/710] Linking target app/dpdk-graph 00:02:38.218 [707/710] Linking target app/dpdk-test-eventdev 00:02:38.218 [708/710] Linking target app/dpdk-testpmd 00:02:40.754 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.754 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:40.754 01:38:22 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:40.755 01:38:22 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:40.755 01:38:22 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:40.755 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:40.755 [0/1] Installing files. 00:02:40.755 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.755 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 
00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:40.755 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:40.755 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:40.756 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.756 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:40.757 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.757 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.757 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:40.757 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:40.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:40.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:40.759 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:40.759 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:40.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.760 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.760 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:40.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:40.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:40.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:40.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:40.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:40.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:40.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:40.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:40.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:40.761 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:40.761 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.328 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.328 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.328 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.328 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.328 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.328 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.328 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.328 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.328 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.328 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.328 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:41.329 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:41.329 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:41.329 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.329 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:41.329 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.329 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.330 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:41.594 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:41.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:41.595 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:41.595 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:41.595 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:41.595 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:41.595 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:41.595 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:41.595 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:41.595 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:41.595 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:41.595 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:41.595 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:41.595 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:41.595 
Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:41.595 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:41.595 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:41.595 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:41.595 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:41.595 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:41.595 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:41.595 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:41.595 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:41.595 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:41.595 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:41.595 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:41.595 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:41.595 Installing symlink pointing to librte_cmdline.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:41.595 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:41.595 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:41.595 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:41.595 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:41.595 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:41.595 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:41.595 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:41.595 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:41.595 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:41.595 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:41.595 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:41.595 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:41.595 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:41.595 
Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:41.595 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:41.595 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:41.595 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:41.595 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:41.595 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:41.595 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:41.595 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:41.595 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:41.595 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:41.595 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:41.595 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:41.595 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:41.595 Installing symlink pointing to librte_eventdev.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:41.595 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:41.595 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:41.595 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:41.595 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:41.595 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:41.595 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:41.595 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:41.595 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:41.595 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:41.595 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:41.595 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:41.595 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:41.595 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 
00:02:41.595 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:41.595 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:41.595 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:41.595 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:41.595 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:41.595 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:41.595 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:41.595 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:41.595 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:41.595 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:41.595 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:41.595 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:41.595 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:41.595 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:41.595 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:41.595 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:41.595 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:41.595 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:41.595 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:41.596 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:41.596 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:41.596 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:41.596 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:41.596 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:41.596 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:41.596 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:41.596 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:41.596 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:41.596 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:41.596 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:41.596 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:41.596 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:41.596 Installing symlink pointing to librte_sched.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:41.596 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:41.596 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:41.596 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:41.596 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:41.596 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:41.596 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:41.596 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:41.596 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:41.596 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:41.596 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:41.596 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:41.596 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:41.596 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:41.596 Installing symlink pointing to 
librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:41.596 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:41.596 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:41.596 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:41.596 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:41.596 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:41.596 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:41.596 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:41.596 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:41.596 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:41.596 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:41.596 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:41.596 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:41.596 Installing symlink pointing to librte_bus_pci.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:41.596 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:41.596 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:41.596 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:41.596 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:41.596 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:41.596 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:41.596 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:41.596 01:38:23 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:41.596 01:38:23 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.596 00:02:41.596 real 1m24.965s 00:02:41.596 user 18m3.017s 00:02:41.596 sys 2m5.807s 00:02:41.596 01:38:23 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:41.596 01:38:23 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:41.596 ************************************ 00:02:41.596 END TEST build_native_dpdk 00:02:41.596 ************************************ 00:02:41.596 01:38:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:41.596 
01:38:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:41.596 01:38:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:41.596 01:38:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:41.596 01:38:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:41.596 01:38:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:41.596 01:38:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:41.596 01:38:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:41.596 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:41.854 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.854 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.854 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:42.112 Using 'verbs' RDMA provider 00:02:52.654 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:02.637 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:02.637 Creating mk/config.mk...done. 00:03:02.637 Creating mk/cc.flags.mk...done. 00:03:02.637 Type 'make' to build. 
00:03:02.637 01:38:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:02.637 01:38:43 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:02.637 01:38:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:02.637 01:38:43 -- common/autotest_common.sh@10 -- $ set +x 00:03:02.637 ************************************ 00:03:02.637 START TEST make 00:03:02.637 ************************************ 00:03:02.637 01:38:43 make -- common/autotest_common.sh@1125 -- $ make -j48 00:03:02.637 make[1]: Nothing to be done for 'all'. 00:03:03.240 The Meson build system 00:03:03.240 Version: 1.3.1 00:03:03.240 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:03.240 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:03.240 Build type: native build 00:03:03.240 Project name: libvfio-user 00:03:03.240 Project version: 0.0.1 00:03:03.240 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:03.240 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:03.240 Host machine cpu family: x86_64 00:03:03.240 Host machine cpu: x86_64 00:03:03.240 Run-time dependency threads found: YES 00:03:03.240 Library dl found: YES 00:03:03.240 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:03.240 Run-time dependency json-c found: YES 0.17 00:03:03.240 Run-time dependency cmocka found: YES 1.1.7 00:03:03.240 Program pytest-3 found: NO 00:03:03.240 Program flake8 found: NO 00:03:03.240 Program misspell-fixer found: NO 00:03:03.240 Program restructuredtext-lint found: NO 00:03:03.240 Program valgrind found: YES (/usr/bin/valgrind) 00:03:03.240 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:03.241 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:03.241 Compiler for C supports arguments -Wwrite-strings: YES 00:03:03.241 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:03.241 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:03.241 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:03.241 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:03.241 Build targets in project: 8 00:03:03.241 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:03.241 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:03.241 00:03:03.241 libvfio-user 0.0.1 00:03:03.241 00:03:03.241 User defined options 00:03:03.241 buildtype : debug 00:03:03.241 default_library: shared 00:03:03.241 libdir : /usr/local/lib 00:03:03.241 00:03:03.241 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:03.822 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:04.081 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:04.081 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:04.081 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:04.081 [4/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:04.081 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:04.081 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:04.081 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:04.081 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:04.081 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:04.081 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:04.081 [11/37] Compiling C object samples/null.p/null.c.o 
00:03:04.081 [12/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:04.081 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:04.081 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:04.081 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:04.081 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:04.347 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:04.347 [18/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:04.347 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:04.347 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:04.347 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:04.347 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:04.347 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:04.347 [24/37] Compiling C object samples/server.p/server.c.o 00:03:04.347 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:04.347 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:04.347 [27/37] Compiling C object samples/client.p/client.c.o 00:03:04.347 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:03:04.347 [29/37] Linking target samples/client 00:03:04.347 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:04.347 [31/37] Linking target test/unit_tests 00:03:04.609 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:04.609 [33/37] Linking target samples/lspci 00:03:04.609 [34/37] Linking target samples/server 00:03:04.609 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:04.609 [36/37] Linking target samples/gpio-pci-idio-16 00:03:04.609 [37/37] Linking target samples/null 00:03:04.609 INFO: autodetecting backend as ninja 00:03:04.609 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:04.872 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:05.448 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:05.448 ninja: no work to do. 00:03:17.652 CC lib/ut_mock/mock.o 00:03:17.652 CC lib/ut/ut.o 00:03:17.652 CC lib/log/log.o 00:03:17.652 CC lib/log/log_flags.o 00:03:17.652 CC lib/log/log_deprecated.o 00:03:17.652 LIB libspdk_log.a 00:03:17.652 LIB libspdk_ut.a 00:03:17.652 LIB libspdk_ut_mock.a 00:03:17.652 SO libspdk_ut.so.2.0 00:03:17.652 SO libspdk_ut_mock.so.6.0 00:03:17.652 SO libspdk_log.so.7.0 00:03:17.652 SYMLINK libspdk_ut.so 00:03:17.652 SYMLINK libspdk_ut_mock.so 00:03:17.652 SYMLINK libspdk_log.so 00:03:17.652 CXX lib/trace_parser/trace.o 00:03:17.652 CC lib/dma/dma.o 00:03:17.652 CC lib/util/base64.o 00:03:17.652 CC lib/ioat/ioat.o 00:03:17.652 CC lib/util/bit_array.o 00:03:17.652 CC lib/util/cpuset.o 00:03:17.652 CC lib/util/crc16.o 00:03:17.652 CC lib/util/crc32.o 00:03:17.652 CC lib/util/crc32c.o 00:03:17.652 CC lib/util/crc32_ieee.o 00:03:17.652 CC lib/util/crc64.o 00:03:17.652 CC lib/util/dif.o 00:03:17.652 CC lib/util/fd.o 00:03:17.652 CC lib/util/fd_group.o 00:03:17.652 CC lib/util/file.o 00:03:17.652 CC lib/util/hexlify.o 00:03:17.652 CC lib/util/iov.o 00:03:17.652 CC lib/util/math.o 00:03:17.652 CC lib/util/net.o 00:03:17.652 CC lib/util/pipe.o 00:03:17.652 CC lib/util/strerror_tls.o 00:03:17.652 CC lib/util/string.o 00:03:17.652 CC lib/util/uuid.o 00:03:17.652 CC lib/util/xor.o 00:03:17.652 CC lib/util/zipf.o 00:03:17.652 CC lib/vfio_user/host/vfio_user_pci.o 00:03:17.652 CC lib/vfio_user/host/vfio_user.o 00:03:17.652 LIB libspdk_dma.a 00:03:17.652 SO libspdk_dma.so.4.0 00:03:17.652 SYMLINK libspdk_dma.so 00:03:17.652 LIB libspdk_ioat.a 
00:03:17.652 SO libspdk_ioat.so.7.0 00:03:17.652 LIB libspdk_vfio_user.a 00:03:17.652 SYMLINK libspdk_ioat.so 00:03:17.652 SO libspdk_vfio_user.so.5.0 00:03:17.652 SYMLINK libspdk_vfio_user.so 00:03:17.910 LIB libspdk_util.a 00:03:17.910 SO libspdk_util.so.10.0 00:03:18.168 SYMLINK libspdk_util.so 00:03:18.168 CC lib/rdma_utils/rdma_utils.o 00:03:18.168 CC lib/conf/conf.o 00:03:18.168 CC lib/env_dpdk/env.o 00:03:18.168 CC lib/json/json_parse.o 00:03:18.168 CC lib/env_dpdk/memory.o 00:03:18.168 CC lib/vmd/vmd.o 00:03:18.168 CC lib/json/json_util.o 00:03:18.168 CC lib/env_dpdk/pci.o 00:03:18.168 CC lib/json/json_write.o 00:03:18.168 CC lib/vmd/led.o 00:03:18.168 CC lib/env_dpdk/init.o 00:03:18.168 CC lib/idxd/idxd.o 00:03:18.168 CC lib/env_dpdk/threads.o 00:03:18.168 CC lib/idxd/idxd_user.o 00:03:18.168 CC lib/env_dpdk/pci_ioat.o 00:03:18.168 CC lib/rdma_provider/common.o 00:03:18.168 CC lib/env_dpdk/pci_virtio.o 00:03:18.168 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:18.168 CC lib/env_dpdk/pci_vmd.o 00:03:18.168 CC lib/idxd/idxd_kernel.o 00:03:18.168 CC lib/env_dpdk/pci_idxd.o 00:03:18.168 CC lib/env_dpdk/pci_event.o 00:03:18.168 CC lib/env_dpdk/pci_dpdk.o 00:03:18.168 CC lib/env_dpdk/sigbus_handler.o 00:03:18.168 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:18.168 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:18.168 LIB libspdk_trace_parser.a 00:03:18.427 SO libspdk_trace_parser.so.5.0 00:03:18.427 SYMLINK libspdk_trace_parser.so 00:03:18.427 LIB libspdk_rdma_provider.a 00:03:18.427 LIB libspdk_conf.a 00:03:18.427 SO libspdk_rdma_provider.so.6.0 00:03:18.427 SO libspdk_conf.so.6.0 00:03:18.427 LIB libspdk_rdma_utils.a 00:03:18.427 SYMLINK libspdk_rdma_provider.so 00:03:18.685 SYMLINK libspdk_conf.so 00:03:18.685 SO libspdk_rdma_utils.so.1.0 00:03:18.685 LIB libspdk_json.a 00:03:18.685 SYMLINK libspdk_rdma_utils.so 00:03:18.685 SO libspdk_json.so.6.0 00:03:18.685 SYMLINK libspdk_json.so 00:03:18.943 CC lib/jsonrpc/jsonrpc_server.o 00:03:18.943 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:03:18.943 CC lib/jsonrpc/jsonrpc_client.o 00:03:18.943 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:18.943 LIB libspdk_idxd.a 00:03:18.943 LIB libspdk_vmd.a 00:03:18.943 SO libspdk_idxd.so.12.0 00:03:18.943 SO libspdk_vmd.so.6.0 00:03:18.943 SYMLINK libspdk_idxd.so 00:03:18.943 SYMLINK libspdk_vmd.so 00:03:19.200 LIB libspdk_jsonrpc.a 00:03:19.200 SO libspdk_jsonrpc.so.6.0 00:03:19.200 SYMLINK libspdk_jsonrpc.so 00:03:19.457 CC lib/rpc/rpc.o 00:03:19.457 LIB libspdk_rpc.a 00:03:19.715 SO libspdk_rpc.so.6.0 00:03:19.715 SYMLINK libspdk_rpc.so 00:03:19.715 CC lib/keyring/keyring.o 00:03:19.715 CC lib/keyring/keyring_rpc.o 00:03:19.715 CC lib/trace/trace.o 00:03:19.715 CC lib/notify/notify.o 00:03:19.715 CC lib/trace/trace_flags.o 00:03:19.715 CC lib/notify/notify_rpc.o 00:03:19.715 CC lib/trace/trace_rpc.o 00:03:19.972 LIB libspdk_notify.a 00:03:19.972 SO libspdk_notify.so.6.0 00:03:19.972 LIB libspdk_keyring.a 00:03:19.972 SYMLINK libspdk_notify.so 00:03:19.972 LIB libspdk_trace.a 00:03:19.972 SO libspdk_keyring.so.1.0 00:03:20.230 SO libspdk_trace.so.10.0 00:03:20.230 SYMLINK libspdk_keyring.so 00:03:20.230 SYMLINK libspdk_trace.so 00:03:20.230 LIB libspdk_env_dpdk.a 00:03:20.230 CC lib/sock/sock.o 00:03:20.230 CC lib/thread/thread.o 00:03:20.230 CC lib/sock/sock_rpc.o 00:03:20.230 CC lib/thread/iobuf.o 00:03:20.230 SO libspdk_env_dpdk.so.15.0 00:03:20.488 SYMLINK libspdk_env_dpdk.so 00:03:20.746 LIB libspdk_sock.a 00:03:20.746 SO libspdk_sock.so.10.0 00:03:20.746 SYMLINK libspdk_sock.so 00:03:21.004 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:21.004 CC lib/nvme/nvme_ctrlr.o 00:03:21.004 CC lib/nvme/nvme_fabric.o 00:03:21.004 CC lib/nvme/nvme_ns_cmd.o 00:03:21.004 CC lib/nvme/nvme_ns.o 00:03:21.004 CC lib/nvme/nvme_pcie_common.o 00:03:21.004 CC lib/nvme/nvme_pcie.o 00:03:21.004 CC lib/nvme/nvme_qpair.o 00:03:21.004 CC lib/nvme/nvme.o 00:03:21.004 CC lib/nvme/nvme_quirks.o 00:03:21.004 CC lib/nvme/nvme_transport.o 00:03:21.004 CC 
lib/nvme/nvme_discovery.o 00:03:21.004 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:21.004 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:21.004 CC lib/nvme/nvme_tcp.o 00:03:21.004 CC lib/nvme/nvme_opal.o 00:03:21.004 CC lib/nvme/nvme_io_msg.o 00:03:21.004 CC lib/nvme/nvme_poll_group.o 00:03:21.004 CC lib/nvme/nvme_zns.o 00:03:21.004 CC lib/nvme/nvme_stubs.o 00:03:21.004 CC lib/nvme/nvme_auth.o 00:03:21.004 CC lib/nvme/nvme_cuse.o 00:03:21.004 CC lib/nvme/nvme_vfio_user.o 00:03:21.004 CC lib/nvme/nvme_rdma.o 00:03:21.936 LIB libspdk_thread.a 00:03:21.936 SO libspdk_thread.so.10.1 00:03:21.936 SYMLINK libspdk_thread.so 00:03:22.194 CC lib/vfu_tgt/tgt_endpoint.o 00:03:22.194 CC lib/blob/blobstore.o 00:03:22.194 CC lib/vfu_tgt/tgt_rpc.o 00:03:22.194 CC lib/blob/request.o 00:03:22.194 CC lib/init/json_config.o 00:03:22.194 CC lib/init/subsystem.o 00:03:22.194 CC lib/blob/zeroes.o 00:03:22.194 CC lib/init/subsystem_rpc.o 00:03:22.194 CC lib/blob/blob_bs_dev.o 00:03:22.194 CC lib/init/rpc.o 00:03:22.194 CC lib/virtio/virtio.o 00:03:22.194 CC lib/accel/accel.o 00:03:22.194 CC lib/virtio/virtio_vhost_user.o 00:03:22.194 CC lib/accel/accel_rpc.o 00:03:22.194 CC lib/virtio/virtio_vfio_user.o 00:03:22.194 CC lib/accel/accel_sw.o 00:03:22.194 CC lib/virtio/virtio_pci.o 00:03:22.453 LIB libspdk_init.a 00:03:22.453 SO libspdk_init.so.5.0 00:03:22.453 LIB libspdk_vfu_tgt.a 00:03:22.453 LIB libspdk_virtio.a 00:03:22.453 SYMLINK libspdk_init.so 00:03:22.453 SO libspdk_vfu_tgt.so.3.0 00:03:22.453 SO libspdk_virtio.so.7.0 00:03:22.711 SYMLINK libspdk_vfu_tgt.so 00:03:22.711 SYMLINK libspdk_virtio.so 00:03:22.711 CC lib/event/app.o 00:03:22.711 CC lib/event/reactor.o 00:03:22.711 CC lib/event/log_rpc.o 00:03:22.711 CC lib/event/app_rpc.o 00:03:22.711 CC lib/event/scheduler_static.o 00:03:23.278 LIB libspdk_event.a 00:03:23.278 SO libspdk_event.so.14.0 00:03:23.278 LIB libspdk_accel.a 00:03:23.278 SO libspdk_accel.so.16.0 00:03:23.278 SYMLINK libspdk_event.so 00:03:23.278 SYMLINK libspdk_accel.so 
00:03:23.278 LIB libspdk_nvme.a 00:03:23.536 CC lib/bdev/bdev.o 00:03:23.536 CC lib/bdev/bdev_rpc.o 00:03:23.536 CC lib/bdev/bdev_zone.o 00:03:23.536 CC lib/bdev/part.o 00:03:23.536 CC lib/bdev/scsi_nvme.o 00:03:23.536 SO libspdk_nvme.so.13.1 00:03:23.794 SYMLINK libspdk_nvme.so 00:03:25.169 LIB libspdk_blob.a 00:03:25.169 SO libspdk_blob.so.11.0 00:03:25.169 SYMLINK libspdk_blob.so 00:03:25.427 CC lib/lvol/lvol.o 00:03:25.427 CC lib/blobfs/blobfs.o 00:03:25.427 CC lib/blobfs/tree.o 00:03:25.994 LIB libspdk_bdev.a 00:03:26.252 SO libspdk_bdev.so.16.0 00:03:26.252 LIB libspdk_blobfs.a 00:03:26.252 SYMLINK libspdk_bdev.so 00:03:26.252 SO libspdk_blobfs.so.10.0 00:03:26.252 SYMLINK libspdk_blobfs.so 00:03:26.252 LIB libspdk_lvol.a 00:03:26.252 SO libspdk_lvol.so.10.0 00:03:26.519 CC lib/ublk/ublk.o 00:03:26.519 CC lib/scsi/dev.o 00:03:26.519 CC lib/nbd/nbd.o 00:03:26.519 CC lib/scsi/lun.o 00:03:26.519 CC lib/ublk/ublk_rpc.o 00:03:26.519 CC lib/nbd/nbd_rpc.o 00:03:26.519 CC lib/ftl/ftl_core.o 00:03:26.519 CC lib/nvmf/ctrlr.o 00:03:26.519 CC lib/scsi/port.o 00:03:26.519 CC lib/ftl/ftl_init.o 00:03:26.519 CC lib/nvmf/ctrlr_discovery.o 00:03:26.519 CC lib/ftl/ftl_layout.o 00:03:26.519 CC lib/nvmf/ctrlr_bdev.o 00:03:26.519 CC lib/scsi/scsi.o 00:03:26.519 CC lib/scsi/scsi_bdev.o 00:03:26.519 CC lib/ftl/ftl_debug.o 00:03:26.519 CC lib/scsi/scsi_pr.o 00:03:26.519 CC lib/nvmf/subsystem.o 00:03:26.519 CC lib/ftl/ftl_io.o 00:03:26.519 CC lib/nvmf/nvmf.o 00:03:26.519 CC lib/scsi/task.o 00:03:26.519 CC lib/ftl/ftl_sb.o 00:03:26.519 CC lib/scsi/scsi_rpc.o 00:03:26.519 CC lib/nvmf/nvmf_rpc.o 00:03:26.519 CC lib/ftl/ftl_l2p.o 00:03:26.519 CC lib/ftl/ftl_l2p_flat.o 00:03:26.519 CC lib/nvmf/transport.o 00:03:26.519 CC lib/ftl/ftl_nv_cache.o 00:03:26.519 CC lib/nvmf/tcp.o 00:03:26.519 CC lib/ftl/ftl_band.o 00:03:26.519 CC lib/nvmf/stubs.o 00:03:26.519 CC lib/ftl/ftl_band_ops.o 00:03:26.519 CC lib/nvmf/mdns_server.o 00:03:26.519 CC lib/ftl/ftl_writer.o 00:03:26.519 CC lib/ftl/ftl_rq.o 
00:03:26.519 CC lib/nvmf/vfio_user.o 00:03:26.519 CC lib/nvmf/rdma.o 00:03:26.519 CC lib/ftl/ftl_reloc.o 00:03:26.519 CC lib/nvmf/auth.o 00:03:26.519 CC lib/ftl/ftl_l2p_cache.o 00:03:26.519 CC lib/ftl/ftl_p2l.o 00:03:26.519 CC lib/ftl/mngt/ftl_mngt.o 00:03:26.519 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:26.519 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:26.519 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:26.519 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:26.519 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:26.519 SYMLINK libspdk_lvol.so 00:03:26.519 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:26.782 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:26.782 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:26.782 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:26.782 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:26.782 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:26.782 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:26.782 CC lib/ftl/utils/ftl_conf.o 00:03:26.782 CC lib/ftl/utils/ftl_md.o 00:03:26.782 CC lib/ftl/utils/ftl_mempool.o 00:03:26.782 CC lib/ftl/utils/ftl_bitmap.o 00:03:26.782 CC lib/ftl/utils/ftl_property.o 00:03:26.782 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:26.782 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:26.782 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:27.045 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:27.045 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:27.045 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:27.045 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:27.045 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:27.045 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:27.045 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:27.045 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:27.045 CC lib/ftl/base/ftl_base_dev.o 00:03:27.045 CC lib/ftl/base/ftl_base_bdev.o 00:03:27.045 CC lib/ftl/ftl_trace.o 00:03:27.304 LIB libspdk_nbd.a 00:03:27.304 SO libspdk_nbd.so.7.0 00:03:27.304 LIB libspdk_scsi.a 00:03:27.304 SYMLINK libspdk_nbd.so 00:03:27.304 SO libspdk_scsi.so.9.0 00:03:27.562 LIB libspdk_ublk.a 00:03:27.562 SO libspdk_ublk.so.3.0 00:03:27.562 SYMLINK libspdk_scsi.so 
00:03:27.562 SYMLINK libspdk_ublk.so 00:03:27.562 CC lib/iscsi/conn.o 00:03:27.562 CC lib/iscsi/init_grp.o 00:03:27.562 CC lib/iscsi/iscsi.o 00:03:27.562 CC lib/vhost/vhost.o 00:03:27.562 CC lib/iscsi/md5.o 00:03:27.562 CC lib/vhost/vhost_rpc.o 00:03:27.562 CC lib/iscsi/param.o 00:03:27.562 CC lib/vhost/vhost_scsi.o 00:03:27.562 CC lib/iscsi/portal_grp.o 00:03:27.562 CC lib/vhost/vhost_blk.o 00:03:27.562 CC lib/iscsi/tgt_node.o 00:03:27.562 CC lib/vhost/rte_vhost_user.o 00:03:27.562 CC lib/iscsi/iscsi_subsystem.o 00:03:27.562 CC lib/iscsi/iscsi_rpc.o 00:03:27.562 CC lib/iscsi/task.o 00:03:27.857 LIB libspdk_ftl.a 00:03:28.136 SO libspdk_ftl.so.9.0 00:03:28.394 SYMLINK libspdk_ftl.so 00:03:28.961 LIB libspdk_vhost.a 00:03:28.961 SO libspdk_vhost.so.8.0 00:03:28.961 SYMLINK libspdk_vhost.so 00:03:28.961 LIB libspdk_nvmf.a 00:03:28.961 LIB libspdk_iscsi.a 00:03:29.220 SO libspdk_nvmf.so.19.0 00:03:29.220 SO libspdk_iscsi.so.8.0 00:03:29.220 SYMLINK libspdk_iscsi.so 00:03:29.220 SYMLINK libspdk_nvmf.so 00:03:29.478 CC module/env_dpdk/env_dpdk_rpc.o 00:03:29.478 CC module/vfu_device/vfu_virtio.o 00:03:29.478 CC module/vfu_device/vfu_virtio_blk.o 00:03:29.478 CC module/vfu_device/vfu_virtio_scsi.o 00:03:29.478 CC module/vfu_device/vfu_virtio_rpc.o 00:03:29.736 CC module/sock/posix/posix.o 00:03:29.736 CC module/scheduler/gscheduler/gscheduler.o 00:03:29.736 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:29.736 CC module/keyring/linux/keyring.o 00:03:29.736 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:29.736 CC module/accel/error/accel_error.o 00:03:29.736 CC module/keyring/linux/keyring_rpc.o 00:03:29.736 CC module/accel/dsa/accel_dsa.o 00:03:29.736 CC module/keyring/file/keyring.o 00:03:29.736 CC module/accel/error/accel_error_rpc.o 00:03:29.736 CC module/accel/dsa/accel_dsa_rpc.o 00:03:29.736 CC module/keyring/file/keyring_rpc.o 00:03:29.736 CC module/blob/bdev/blob_bdev.o 00:03:29.736 CC module/accel/ioat/accel_ioat.o 00:03:29.736 CC 
module/accel/iaa/accel_iaa.o 00:03:29.736 CC module/accel/ioat/accel_ioat_rpc.o 00:03:29.737 CC module/accel/iaa/accel_iaa_rpc.o 00:03:29.737 LIB libspdk_env_dpdk_rpc.a 00:03:29.737 SO libspdk_env_dpdk_rpc.so.6.0 00:03:29.737 SYMLINK libspdk_env_dpdk_rpc.so 00:03:29.737 LIB libspdk_keyring_file.a 00:03:29.737 LIB libspdk_keyring_linux.a 00:03:29.737 LIB libspdk_scheduler_dpdk_governor.a 00:03:29.737 LIB libspdk_scheduler_gscheduler.a 00:03:29.995 SO libspdk_keyring_file.so.1.0 00:03:29.995 SO libspdk_keyring_linux.so.1.0 00:03:29.995 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:29.995 SO libspdk_scheduler_gscheduler.so.4.0 00:03:29.995 LIB libspdk_accel_error.a 00:03:29.995 LIB libspdk_accel_ioat.a 00:03:29.995 LIB libspdk_scheduler_dynamic.a 00:03:29.995 LIB libspdk_accel_iaa.a 00:03:29.995 SO libspdk_accel_error.so.2.0 00:03:29.995 SO libspdk_scheduler_dynamic.so.4.0 00:03:29.995 SO libspdk_accel_ioat.so.6.0 00:03:29.995 SYMLINK libspdk_keyring_file.so 00:03:29.995 SYMLINK libspdk_keyring_linux.so 00:03:29.995 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:29.995 SYMLINK libspdk_scheduler_gscheduler.so 00:03:29.995 SO libspdk_accel_iaa.so.3.0 00:03:29.995 LIB libspdk_accel_dsa.a 00:03:29.995 SYMLINK libspdk_scheduler_dynamic.so 00:03:29.995 SYMLINK libspdk_accel_error.so 00:03:29.995 SYMLINK libspdk_accel_ioat.so 00:03:29.995 LIB libspdk_blob_bdev.a 00:03:29.995 SO libspdk_accel_dsa.so.5.0 00:03:29.995 SYMLINK libspdk_accel_iaa.so 00:03:29.995 SO libspdk_blob_bdev.so.11.0 00:03:29.995 SYMLINK libspdk_accel_dsa.so 00:03:29.995 SYMLINK libspdk_blob_bdev.so 00:03:30.254 LIB libspdk_vfu_device.a 00:03:30.254 SO libspdk_vfu_device.so.3.0 00:03:30.254 CC module/blobfs/bdev/blobfs_bdev.o 00:03:30.254 CC module/bdev/null/bdev_null.o 00:03:30.254 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:30.254 CC module/bdev/delay/vbdev_delay.o 00:03:30.254 CC module/bdev/null/bdev_null_rpc.o 00:03:30.254 CC module/bdev/aio/bdev_aio.o 00:03:30.254 CC 
module/bdev/malloc/bdev_malloc.o 00:03:30.254 CC module/bdev/aio/bdev_aio_rpc.o 00:03:30.254 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:30.254 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:30.254 CC module/bdev/lvol/vbdev_lvol.o 00:03:30.254 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:30.254 CC module/bdev/ftl/bdev_ftl.o 00:03:30.254 CC module/bdev/split/vbdev_split.o 00:03:30.254 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:30.254 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:30.254 CC module/bdev/split/vbdev_split_rpc.o 00:03:30.254 CC module/bdev/passthru/vbdev_passthru.o 00:03:30.254 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:30.254 CC module/bdev/error/vbdev_error.o 00:03:30.254 CC module/bdev/gpt/gpt.o 00:03:30.254 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:30.254 CC module/bdev/raid/bdev_raid.o 00:03:30.254 CC module/bdev/nvme/bdev_nvme.o 00:03:30.254 CC module/bdev/raid/bdev_raid_rpc.o 00:03:30.254 CC module/bdev/gpt/vbdev_gpt.o 00:03:30.254 CC module/bdev/error/vbdev_error_rpc.o 00:03:30.254 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:30.254 CC module/bdev/raid/bdev_raid_sb.o 00:03:30.254 CC module/bdev/nvme/nvme_rpc.o 00:03:30.254 CC module/bdev/nvme/bdev_mdns_client.o 00:03:30.254 CC module/bdev/raid/raid0.o 00:03:30.254 CC module/bdev/nvme/vbdev_opal.o 00:03:30.254 CC module/bdev/raid/raid1.o 00:03:30.254 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:30.254 CC module/bdev/iscsi/bdev_iscsi.o 00:03:30.254 CC module/bdev/raid/concat.o 00:03:30.254 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:30.254 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:30.254 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:30.254 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:30.254 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:30.513 SYMLINK libspdk_vfu_device.so 00:03:30.513 LIB libspdk_sock_posix.a 00:03:30.513 SO libspdk_sock_posix.so.6.0 00:03:30.772 LIB libspdk_blobfs_bdev.a 00:03:30.772 LIB libspdk_bdev_error.a 00:03:30.772 SYMLINK libspdk_sock_posix.so 
00:03:30.772 SO libspdk_blobfs_bdev.so.6.0 00:03:30.772 SO libspdk_bdev_error.so.6.0 00:03:30.772 LIB libspdk_bdev_split.a 00:03:30.772 SYMLINK libspdk_blobfs_bdev.so 00:03:30.772 LIB libspdk_bdev_delay.a 00:03:30.772 LIB libspdk_bdev_null.a 00:03:30.772 LIB libspdk_bdev_gpt.a 00:03:30.772 SO libspdk_bdev_split.so.6.0 00:03:30.772 SYMLINK libspdk_bdev_error.so 00:03:30.772 SO libspdk_bdev_delay.so.6.0 00:03:30.772 SO libspdk_bdev_gpt.so.6.0 00:03:30.772 SO libspdk_bdev_null.so.6.0 00:03:30.772 LIB libspdk_bdev_passthru.a 00:03:30.772 SYMLINK libspdk_bdev_split.so 00:03:30.772 SO libspdk_bdev_passthru.so.6.0 00:03:30.772 LIB libspdk_bdev_zone_block.a 00:03:30.772 LIB libspdk_bdev_ftl.a 00:03:30.772 SYMLINK libspdk_bdev_delay.so 00:03:30.772 LIB libspdk_bdev_malloc.a 00:03:30.772 LIB libspdk_bdev_aio.a 00:03:30.772 SYMLINK libspdk_bdev_gpt.so 00:03:30.772 SYMLINK libspdk_bdev_null.so 00:03:30.772 SO libspdk_bdev_zone_block.so.6.0 00:03:30.772 SO libspdk_bdev_ftl.so.6.0 00:03:30.772 LIB libspdk_bdev_iscsi.a 00:03:31.031 SO libspdk_bdev_malloc.so.6.0 00:03:31.031 SO libspdk_bdev_aio.so.6.0 00:03:31.031 SYMLINK libspdk_bdev_passthru.so 00:03:31.031 SO libspdk_bdev_iscsi.so.6.0 00:03:31.031 SYMLINK libspdk_bdev_zone_block.so 00:03:31.031 SYMLINK libspdk_bdev_ftl.so 00:03:31.031 SYMLINK libspdk_bdev_malloc.so 00:03:31.031 SYMLINK libspdk_bdev_aio.so 00:03:31.031 SYMLINK libspdk_bdev_iscsi.so 00:03:31.031 LIB libspdk_bdev_virtio.a 00:03:31.031 LIB libspdk_bdev_lvol.a 00:03:31.031 SO libspdk_bdev_virtio.so.6.0 00:03:31.031 SO libspdk_bdev_lvol.so.6.0 00:03:31.031 SYMLINK libspdk_bdev_virtio.so 00:03:31.031 SYMLINK libspdk_bdev_lvol.so 00:03:31.596 LIB libspdk_bdev_raid.a 00:03:31.596 SO libspdk_bdev_raid.so.6.0 00:03:31.596 SYMLINK libspdk_bdev_raid.so 00:03:32.967 LIB libspdk_bdev_nvme.a 00:03:32.967 SO libspdk_bdev_nvme.so.7.0 00:03:32.967 SYMLINK libspdk_bdev_nvme.so 00:03:33.533 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:33.533 CC 
module/event/subsystems/vmd/vmd.o 00:03:33.533 CC module/event/subsystems/iobuf/iobuf.o 00:03:33.533 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:33.533 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:33.533 CC module/event/subsystems/keyring/keyring.o 00:03:33.533 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:33.533 CC module/event/subsystems/scheduler/scheduler.o 00:03:33.533 CC module/event/subsystems/sock/sock.o 00:03:33.533 LIB libspdk_event_keyring.a 00:03:33.533 LIB libspdk_event_vhost_blk.a 00:03:33.533 LIB libspdk_event_vfu_tgt.a 00:03:33.533 LIB libspdk_event_scheduler.a 00:03:33.533 LIB libspdk_event_vmd.a 00:03:33.533 LIB libspdk_event_sock.a 00:03:33.533 SO libspdk_event_keyring.so.1.0 00:03:33.533 SO libspdk_event_vhost_blk.so.3.0 00:03:33.533 LIB libspdk_event_iobuf.a 00:03:33.533 SO libspdk_event_vfu_tgt.so.3.0 00:03:33.533 SO libspdk_event_scheduler.so.4.0 00:03:33.533 SO libspdk_event_sock.so.5.0 00:03:33.533 SO libspdk_event_vmd.so.6.0 00:03:33.533 SO libspdk_event_iobuf.so.3.0 00:03:33.533 SYMLINK libspdk_event_keyring.so 00:03:33.533 SYMLINK libspdk_event_vhost_blk.so 00:03:33.533 SYMLINK libspdk_event_vfu_tgt.so 00:03:33.533 SYMLINK libspdk_event_scheduler.so 00:03:33.533 SYMLINK libspdk_event_sock.so 00:03:33.533 SYMLINK libspdk_event_vmd.so 00:03:33.533 SYMLINK libspdk_event_iobuf.so 00:03:33.791 CC module/event/subsystems/accel/accel.o 00:03:34.048 LIB libspdk_event_accel.a 00:03:34.048 SO libspdk_event_accel.so.6.0 00:03:34.048 SYMLINK libspdk_event_accel.so 00:03:34.306 CC module/event/subsystems/bdev/bdev.o 00:03:34.306 LIB libspdk_event_bdev.a 00:03:34.306 SO libspdk_event_bdev.so.6.0 00:03:34.306 SYMLINK libspdk_event_bdev.so 00:03:34.564 CC module/event/subsystems/scsi/scsi.o 00:03:34.564 CC module/event/subsystems/nbd/nbd.o 00:03:34.564 CC module/event/subsystems/ublk/ublk.o 00:03:34.564 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:34.564 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:34.822 LIB libspdk_event_nbd.a 
00:03:34.822 LIB libspdk_event_ublk.a 00:03:34.822 LIB libspdk_event_scsi.a 00:03:34.822 SO libspdk_event_nbd.so.6.0 00:03:34.822 SO libspdk_event_ublk.so.3.0 00:03:34.822 SO libspdk_event_scsi.so.6.0 00:03:34.822 SYMLINK libspdk_event_nbd.so 00:03:34.822 SYMLINK libspdk_event_ublk.so 00:03:34.822 SYMLINK libspdk_event_scsi.so 00:03:34.822 LIB libspdk_event_nvmf.a 00:03:34.822 SO libspdk_event_nvmf.so.6.0 00:03:34.822 SYMLINK libspdk_event_nvmf.so 00:03:35.080 CC module/event/subsystems/iscsi/iscsi.o 00:03:35.080 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:35.080 LIB libspdk_event_vhost_scsi.a 00:03:35.080 LIB libspdk_event_iscsi.a 00:03:35.080 SO libspdk_event_vhost_scsi.so.3.0 00:03:35.080 SO libspdk_event_iscsi.so.6.0 00:03:35.337 SYMLINK libspdk_event_vhost_scsi.so 00:03:35.337 SYMLINK libspdk_event_iscsi.so 00:03:35.337 SO libspdk.so.6.0 00:03:35.337 SYMLINK libspdk.so 00:03:35.598 CXX app/trace/trace.o 00:03:35.599 CC app/trace_record/trace_record.o 00:03:35.599 CC app/spdk_lspci/spdk_lspci.o 00:03:35.599 CC test/rpc_client/rpc_client_test.o 00:03:35.599 CC app/spdk_top/spdk_top.o 00:03:35.599 CC app/spdk_nvme_perf/perf.o 00:03:35.599 CC app/spdk_nvme_discover/discovery_aer.o 00:03:35.599 CC app/spdk_nvme_identify/identify.o 00:03:35.599 TEST_HEADER include/spdk/accel.h 00:03:35.599 TEST_HEADER include/spdk/accel_module.h 00:03:35.599 TEST_HEADER include/spdk/assert.h 00:03:35.599 TEST_HEADER include/spdk/barrier.h 00:03:35.599 TEST_HEADER include/spdk/base64.h 00:03:35.599 TEST_HEADER include/spdk/bdev_module.h 00:03:35.599 TEST_HEADER include/spdk/bdev.h 00:03:35.599 TEST_HEADER include/spdk/bdev_zone.h 00:03:35.599 TEST_HEADER include/spdk/bit_array.h 00:03:35.599 TEST_HEADER include/spdk/bit_pool.h 00:03:35.599 TEST_HEADER include/spdk/blob_bdev.h 00:03:35.599 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:35.599 TEST_HEADER include/spdk/blobfs.h 00:03:35.599 TEST_HEADER include/spdk/blob.h 00:03:35.599 TEST_HEADER include/spdk/conf.h 
00:03:35.599 TEST_HEADER include/spdk/config.h 00:03:35.599 TEST_HEADER include/spdk/cpuset.h 00:03:35.599 TEST_HEADER include/spdk/crc16.h 00:03:35.599 TEST_HEADER include/spdk/crc32.h 00:03:35.599 TEST_HEADER include/spdk/dif.h 00:03:35.599 TEST_HEADER include/spdk/crc64.h 00:03:35.599 TEST_HEADER include/spdk/dma.h 00:03:35.599 TEST_HEADER include/spdk/endian.h 00:03:35.599 TEST_HEADER include/spdk/env.h 00:03:35.599 TEST_HEADER include/spdk/env_dpdk.h 00:03:35.599 TEST_HEADER include/spdk/event.h 00:03:35.599 TEST_HEADER include/spdk/fd_group.h 00:03:35.599 TEST_HEADER include/spdk/fd.h 00:03:35.599 TEST_HEADER include/spdk/file.h 00:03:35.599 TEST_HEADER include/spdk/ftl.h 00:03:35.599 TEST_HEADER include/spdk/hexlify.h 00:03:35.599 TEST_HEADER include/spdk/gpt_spec.h 00:03:35.599 TEST_HEADER include/spdk/histogram_data.h 00:03:35.599 TEST_HEADER include/spdk/idxd.h 00:03:35.599 TEST_HEADER include/spdk/init.h 00:03:35.599 TEST_HEADER include/spdk/idxd_spec.h 00:03:35.599 TEST_HEADER include/spdk/ioat.h 00:03:35.599 TEST_HEADER include/spdk/ioat_spec.h 00:03:35.599 TEST_HEADER include/spdk/iscsi_spec.h 00:03:35.599 TEST_HEADER include/spdk/json.h 00:03:35.599 TEST_HEADER include/spdk/jsonrpc.h 00:03:35.599 TEST_HEADER include/spdk/keyring_module.h 00:03:35.599 TEST_HEADER include/spdk/keyring.h 00:03:35.599 TEST_HEADER include/spdk/likely.h 00:03:35.599 TEST_HEADER include/spdk/log.h 00:03:35.599 TEST_HEADER include/spdk/lvol.h 00:03:35.599 TEST_HEADER include/spdk/memory.h 00:03:35.599 TEST_HEADER include/spdk/mmio.h 00:03:35.599 TEST_HEADER include/spdk/nbd.h 00:03:35.599 TEST_HEADER include/spdk/notify.h 00:03:35.599 TEST_HEADER include/spdk/net.h 00:03:35.599 TEST_HEADER include/spdk/nvme.h 00:03:35.599 TEST_HEADER include/spdk/nvme_intel.h 00:03:35.599 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:35.599 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:35.599 TEST_HEADER include/spdk/nvme_spec.h 00:03:35.599 TEST_HEADER include/spdk/nvme_zns.h 00:03:35.599 
TEST_HEADER include/spdk/nvmf_cmd.h 00:03:35.599 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:35.599 TEST_HEADER include/spdk/nvmf.h 00:03:35.599 TEST_HEADER include/spdk/nvmf_spec.h 00:03:35.599 TEST_HEADER include/spdk/nvmf_transport.h 00:03:35.599 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:35.599 TEST_HEADER include/spdk/opal_spec.h 00:03:35.599 TEST_HEADER include/spdk/opal.h 00:03:35.599 TEST_HEADER include/spdk/pci_ids.h 00:03:35.599 TEST_HEADER include/spdk/pipe.h 00:03:35.599 TEST_HEADER include/spdk/queue.h 00:03:35.599 TEST_HEADER include/spdk/reduce.h 00:03:35.599 TEST_HEADER include/spdk/rpc.h 00:03:35.599 TEST_HEADER include/spdk/scheduler.h 00:03:35.599 TEST_HEADER include/spdk/scsi.h 00:03:35.599 TEST_HEADER include/spdk/scsi_spec.h 00:03:35.599 TEST_HEADER include/spdk/sock.h 00:03:35.599 TEST_HEADER include/spdk/stdinc.h 00:03:35.599 TEST_HEADER include/spdk/string.h 00:03:35.599 TEST_HEADER include/spdk/thread.h 00:03:35.599 CC app/spdk_dd/spdk_dd.o 00:03:35.599 TEST_HEADER include/spdk/trace.h 00:03:35.599 TEST_HEADER include/spdk/trace_parser.h 00:03:35.599 TEST_HEADER include/spdk/tree.h 00:03:35.599 TEST_HEADER include/spdk/ublk.h 00:03:35.599 TEST_HEADER include/spdk/util.h 00:03:35.599 TEST_HEADER include/spdk/uuid.h 00:03:35.599 TEST_HEADER include/spdk/version.h 00:03:35.599 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:35.599 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:35.599 TEST_HEADER include/spdk/vhost.h 00:03:35.599 TEST_HEADER include/spdk/vmd.h 00:03:35.599 TEST_HEADER include/spdk/xor.h 00:03:35.599 TEST_HEADER include/spdk/zipf.h 00:03:35.599 CXX test/cpp_headers/accel.o 00:03:35.599 CXX test/cpp_headers/accel_module.o 00:03:35.599 CXX test/cpp_headers/assert.o 00:03:35.599 CXX test/cpp_headers/barrier.o 00:03:35.599 CXX test/cpp_headers/base64.o 00:03:35.599 CXX test/cpp_headers/bdev.o 00:03:35.599 CXX test/cpp_headers/bdev_module.o 00:03:35.599 CXX test/cpp_headers/bdev_zone.o 00:03:35.599 CXX 
test/cpp_headers/bit_array.o 00:03:35.599 CXX test/cpp_headers/bit_pool.o 00:03:35.599 CXX test/cpp_headers/blob_bdev.o 00:03:35.599 CXX test/cpp_headers/blobfs_bdev.o 00:03:35.599 CXX test/cpp_headers/blobfs.o 00:03:35.599 CXX test/cpp_headers/blob.o 00:03:35.599 CXX test/cpp_headers/conf.o 00:03:35.599 CXX test/cpp_headers/config.o 00:03:35.599 CXX test/cpp_headers/cpuset.o 00:03:35.599 CXX test/cpp_headers/crc16.o 00:03:35.599 CC app/nvmf_tgt/nvmf_main.o 00:03:35.599 CC app/iscsi_tgt/iscsi_tgt.o 00:03:35.599 CXX test/cpp_headers/crc32.o 00:03:35.599 CC examples/util/zipf/zipf.o 00:03:35.599 CC examples/ioat/perf/perf.o 00:03:35.599 CC test/thread/poller_perf/poller_perf.o 00:03:35.599 CC test/app/jsoncat/jsoncat.o 00:03:35.599 CC test/env/pci/pci_ut.o 00:03:35.599 CC examples/ioat/verify/verify.o 00:03:35.599 CC test/env/memory/memory_ut.o 00:03:35.599 CC test/app/stub/stub.o 00:03:35.599 CC test/app/histogram_perf/histogram_perf.o 00:03:35.599 CC test/env/vtophys/vtophys.o 00:03:35.599 CC app/spdk_tgt/spdk_tgt.o 00:03:35.599 CC app/fio/nvme/fio_plugin.o 00:03:35.599 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:35.859 CC test/dma/test_dma/test_dma.o 00:03:35.859 CC app/fio/bdev/fio_plugin.o 00:03:35.859 CC test/app/bdev_svc/bdev_svc.o 00:03:35.859 LINK spdk_lspci 00:03:35.859 CC test/env/mem_callbacks/mem_callbacks.o 00:03:35.859 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:35.859 LINK rpc_client_test 00:03:35.859 LINK spdk_nvme_discover 00:03:35.859 LINK jsoncat 00:03:35.860 LINK zipf 00:03:36.124 LINK vtophys 00:03:36.124 LINK interrupt_tgt 00:03:36.124 CXX test/cpp_headers/crc64.o 00:03:36.124 CXX test/cpp_headers/dif.o 00:03:36.124 LINK poller_perf 00:03:36.124 LINK histogram_perf 00:03:36.124 CXX test/cpp_headers/dma.o 00:03:36.124 CXX test/cpp_headers/endian.o 00:03:36.124 CXX test/cpp_headers/env_dpdk.o 00:03:36.124 CXX test/cpp_headers/env.o 00:03:36.124 CXX test/cpp_headers/event.o 00:03:36.124 LINK nvmf_tgt 00:03:36.124 CXX 
test/cpp_headers/fd_group.o 00:03:36.124 CXX test/cpp_headers/fd.o 00:03:36.124 LINK env_dpdk_post_init 00:03:36.124 CXX test/cpp_headers/file.o 00:03:36.124 LINK spdk_trace_record 00:03:36.124 LINK stub 00:03:36.124 CXX test/cpp_headers/ftl.o 00:03:36.124 CXX test/cpp_headers/gpt_spec.o 00:03:36.124 CXX test/cpp_headers/hexlify.o 00:03:36.124 LINK iscsi_tgt 00:03:36.124 CXX test/cpp_headers/histogram_data.o 00:03:36.124 CXX test/cpp_headers/idxd.o 00:03:36.124 LINK ioat_perf 00:03:36.124 LINK bdev_svc 00:03:36.124 CXX test/cpp_headers/idxd_spec.o 00:03:36.124 CXX test/cpp_headers/init.o 00:03:36.124 LINK spdk_tgt 00:03:36.124 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:36.124 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:36.124 LINK verify 00:03:36.124 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:36.383 CXX test/cpp_headers/ioat.o 00:03:36.383 CXX test/cpp_headers/ioat_spec.o 00:03:36.383 CXX test/cpp_headers/iscsi_spec.o 00:03:36.383 CXX test/cpp_headers/json.o 00:03:36.383 CXX test/cpp_headers/jsonrpc.o 00:03:36.383 LINK spdk_dd 00:03:36.383 CXX test/cpp_headers/keyring.o 00:03:36.383 LINK spdk_trace 00:03:36.383 CXX test/cpp_headers/keyring_module.o 00:03:36.383 CXX test/cpp_headers/likely.o 00:03:36.383 CXX test/cpp_headers/log.o 00:03:36.383 CXX test/cpp_headers/lvol.o 00:03:36.383 CXX test/cpp_headers/memory.o 00:03:36.383 CXX test/cpp_headers/mmio.o 00:03:36.383 CXX test/cpp_headers/nbd.o 00:03:36.383 CXX test/cpp_headers/net.o 00:03:36.383 LINK pci_ut 00:03:36.383 CXX test/cpp_headers/notify.o 00:03:36.383 CXX test/cpp_headers/nvme.o 00:03:36.383 CXX test/cpp_headers/nvme_intel.o 00:03:36.383 CXX test/cpp_headers/nvme_ocssd.o 00:03:36.383 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.383 CXX test/cpp_headers/nvme_spec.o 00:03:36.383 CXX test/cpp_headers/nvme_zns.o 00:03:36.383 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.647 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.647 LINK test_dma 00:03:36.647 CXX test/cpp_headers/nvmf.o 00:03:36.647 CXX 
test/cpp_headers/nvmf_spec.o 00:03:36.647 CXX test/cpp_headers/nvmf_transport.o 00:03:36.647 CXX test/cpp_headers/opal.o 00:03:36.647 CXX test/cpp_headers/opal_spec.o 00:03:36.647 LINK nvme_fuzz 00:03:36.647 CXX test/cpp_headers/pci_ids.o 00:03:36.647 CXX test/cpp_headers/pipe.o 00:03:36.647 CXX test/cpp_headers/queue.o 00:03:36.647 CC examples/sock/hello_world/hello_sock.o 00:03:36.647 CC examples/vmd/lsvmd/lsvmd.o 00:03:36.647 LINK spdk_bdev 00:03:36.647 CC test/event/event_perf/event_perf.o 00:03:36.647 CC test/event/reactor/reactor.o 00:03:36.647 CC examples/thread/thread/thread_ex.o 00:03:36.647 CC examples/vmd/led/led.o 00:03:36.647 CC examples/idxd/perf/perf.o 00:03:36.911 CXX test/cpp_headers/reduce.o 00:03:36.911 LINK spdk_nvme 00:03:36.911 CXX test/cpp_headers/rpc.o 00:03:36.911 CXX test/cpp_headers/scheduler.o 00:03:36.911 CXX test/cpp_headers/scsi.o 00:03:36.911 CXX test/cpp_headers/scsi_spec.o 00:03:36.911 CXX test/cpp_headers/sock.o 00:03:36.911 CXX test/cpp_headers/stdinc.o 00:03:36.911 CC test/event/reactor_perf/reactor_perf.o 00:03:36.911 CXX test/cpp_headers/string.o 00:03:36.911 CXX test/cpp_headers/thread.o 00:03:36.911 CXX test/cpp_headers/trace.o 00:03:36.911 CC test/event/app_repeat/app_repeat.o 00:03:36.911 CXX test/cpp_headers/trace_parser.o 00:03:36.911 CXX test/cpp_headers/tree.o 00:03:36.911 CXX test/cpp_headers/ublk.o 00:03:36.911 CXX test/cpp_headers/util.o 00:03:36.911 CXX test/cpp_headers/uuid.o 00:03:36.911 CXX test/cpp_headers/version.o 00:03:36.911 CC test/event/scheduler/scheduler.o 00:03:36.911 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.911 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.911 LINK mem_callbacks 00:03:36.911 CXX test/cpp_headers/vhost.o 00:03:36.911 CXX test/cpp_headers/vmd.o 00:03:36.911 CXX test/cpp_headers/xor.o 00:03:36.911 LINK lsvmd 00:03:36.911 CXX test/cpp_headers/zipf.o 00:03:36.911 CC app/vhost/vhost.o 00:03:36.911 LINK reactor 00:03:36.911 LINK event_perf 00:03:37.173 LINK led 00:03:37.173 LINK 
vhost_fuzz 00:03:37.173 LINK spdk_top 00:03:37.173 LINK spdk_nvme_perf 00:03:37.173 LINK spdk_nvme_identify 00:03:37.173 LINK reactor_perf 00:03:37.173 LINK hello_sock 00:03:37.173 CC test/nvme/aer/aer.o 00:03:37.173 CC test/nvme/err_injection/err_injection.o 00:03:37.173 CC test/nvme/startup/startup.o 00:03:37.173 CC test/nvme/overhead/overhead.o 00:03:37.173 CC test/nvme/reset/reset.o 00:03:37.173 CC test/nvme/sgl/sgl.o 00:03:37.173 LINK thread 00:03:37.173 CC test/nvme/e2edp/nvme_dp.o 00:03:37.173 LINK app_repeat 00:03:37.173 CC test/nvme/reserve/reserve.o 00:03:37.431 CC test/accel/dif/dif.o 00:03:37.431 CC test/blobfs/mkfs/mkfs.o 00:03:37.431 CC test/nvme/simple_copy/simple_copy.o 00:03:37.431 CC test/nvme/connect_stress/connect_stress.o 00:03:37.431 CC test/nvme/fused_ordering/fused_ordering.o 00:03:37.431 CC test/nvme/boot_partition/boot_partition.o 00:03:37.431 CC test/nvme/compliance/nvme_compliance.o 00:03:37.431 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:37.431 CC test/nvme/fdp/fdp.o 00:03:37.431 CC test/nvme/cuse/cuse.o 00:03:37.431 CC test/lvol/esnap/esnap.o 00:03:37.431 LINK idxd_perf 00:03:37.431 LINK vhost 00:03:37.431 LINK scheduler 00:03:37.690 LINK err_injection 00:03:37.690 LINK mkfs 00:03:37.690 LINK boot_partition 00:03:37.690 LINK startup 00:03:37.690 LINK sgl 00:03:37.690 LINK fused_ordering 00:03:37.690 LINK nvme_dp 00:03:37.690 CC examples/nvme/abort/abort.o 00:03:37.690 CC examples/nvme/reconnect/reconnect.o 00:03:37.690 LINK aer 00:03:37.690 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:37.690 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:37.690 CC examples/nvme/hotplug/hotplug.o 00:03:37.690 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:37.690 CC examples/nvme/arbitration/arbitration.o 00:03:37.690 CC examples/nvme/hello_world/hello_world.o 00:03:37.690 LINK connect_stress 00:03:37.690 LINK reset 00:03:37.690 LINK simple_copy 00:03:37.690 LINK reserve 00:03:37.690 LINK doorbell_aers 00:03:37.690 LINK overhead 
00:03:37.690 LINK fdp 00:03:37.690 LINK nvme_compliance 00:03:37.948 LINK memory_ut 00:03:37.948 CC examples/accel/perf/accel_perf.o 00:03:37.948 CC examples/blob/cli/blobcli.o 00:03:37.948 LINK pmr_persistence 00:03:37.948 CC examples/blob/hello_world/hello_blob.o 00:03:37.948 LINK hello_world 00:03:37.948 LINK dif 00:03:37.948 LINK cmb_copy 00:03:37.948 LINK hotplug 00:03:38.206 LINK reconnect 00:03:38.206 LINK arbitration 00:03:38.206 LINK abort 00:03:38.206 LINK hello_blob 00:03:38.206 LINK nvme_manage 00:03:38.463 LINK accel_perf 00:03:38.464 CC test/bdev/bdevio/bdevio.o 00:03:38.464 LINK blobcli 00:03:38.721 LINK iscsi_fuzz 00:03:38.721 CC examples/bdev/hello_world/hello_bdev.o 00:03:38.721 CC examples/bdev/bdevperf/bdevperf.o 00:03:38.721 LINK bdevio 00:03:38.979 LINK cuse 00:03:38.979 LINK hello_bdev 00:03:39.545 LINK bdevperf 00:03:39.803 CC examples/nvmf/nvmf/nvmf.o 00:03:40.061 LINK nvmf 00:03:42.589 LINK esnap 00:03:42.847 00:03:42.847 real 0m41.360s 00:03:42.847 user 7m28.702s 00:03:42.847 sys 1m48.461s 00:03:42.847 01:39:24 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:42.847 01:39:24 make -- common/autotest_common.sh@10 -- $ set +x 00:03:42.847 ************************************ 00:03:42.847 END TEST make 00:03:42.847 ************************************ 00:03:42.847 01:39:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:42.847 01:39:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:42.847 01:39:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:42.847 01:39:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.847 01:39:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:42.847 01:39:24 -- pm/common@44 -- $ pid=2028401 00:03:42.847 01:39:24 -- pm/common@50 -- $ kill -TERM 2028401 00:03:42.847 01:39:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.847 01:39:24 -- pm/common@43 -- 
$ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:42.847 01:39:24 -- pm/common@44 -- $ pid=2028403 00:03:42.847 01:39:24 -- pm/common@50 -- $ kill -TERM 2028403 00:03:42.847 01:39:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.847 01:39:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:42.847 01:39:24 -- pm/common@44 -- $ pid=2028405 00:03:42.847 01:39:24 -- pm/common@50 -- $ kill -TERM 2028405 00:03:42.847 01:39:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.847 01:39:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:42.847 01:39:24 -- pm/common@44 -- $ pid=2028433 00:03:42.847 01:39:24 -- pm/common@50 -- $ sudo -E kill -TERM 2028433 00:03:42.847 01:39:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:42.847 01:39:24 -- nvmf/common.sh@7 -- # uname -s 00:03:42.847 01:39:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:42.847 01:39:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:42.847 01:39:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:42.848 01:39:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:42.848 01:39:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:42.848 01:39:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:42.848 01:39:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:42.848 01:39:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:42.848 01:39:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:42.848 01:39:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:42.848 01:39:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:42.848 01:39:24 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:42.848 01:39:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:42.848 01:39:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:42.848 01:39:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:42.848 01:39:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:42.848 01:39:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:42.848 01:39:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:42.848 01:39:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.848 01:39:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.848 01:39:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.848 01:39:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.848 01:39:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.848 01:39:24 -- paths/export.sh@5 -- # export PATH 00:03:42.848 01:39:24 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.848 01:39:24 -- nvmf/common.sh@47 -- # : 0 00:03:42.848 01:39:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:42.848 01:39:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:42.848 01:39:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:42.848 01:39:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:42.848 01:39:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:42.848 01:39:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:42.848 01:39:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:42.848 01:39:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:42.848 01:39:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:42.848 01:39:24 -- spdk/autotest.sh@32 -- # uname -s 00:03:42.848 01:39:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:42.848 01:39:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:42.848 01:39:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:42.848 01:39:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:42.848 01:39:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:42.848 01:39:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:42.848 01:39:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:42.848 01:39:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:42.848 01:39:24 -- spdk/autotest.sh@48 -- # udevadm_pid=2104772 00:03:42.848 01:39:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:42.848 01:39:24 -- 
spdk/autotest.sh@53 -- # start_monitor_resources 00:03:42.848 01:39:24 -- pm/common@17 -- # local monitor 00:03:42.848 01:39:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.848 01:39:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.848 01:39:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.848 01:39:24 -- pm/common@21 -- # date +%s 00:03:42.848 01:39:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.848 01:39:24 -- pm/common@21 -- # date +%s 00:03:42.848 01:39:24 -- pm/common@25 -- # sleep 1 00:03:42.848 01:39:24 -- pm/common@21 -- # date +%s 00:03:42.848 01:39:24 -- pm/common@21 -- # date +%s 00:03:42.848 01:39:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721950764 00:03:42.848 01:39:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721950764 00:03:42.848 01:39:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721950764 00:03:42.848 01:39:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721950764 00:03:42.848 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721950764_collect-vmstat.pm.log 00:03:42.848 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721950764_collect-cpu-load.pm.log 00:03:42.848 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721950764_collect-cpu-temp.pm.log 00:03:42.848 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721950764_collect-bmc-pm.bmc.pm.log 00:03:43.782 01:39:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:43.782 01:39:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:43.782 01:39:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:43.783 01:39:25 -- common/autotest_common.sh@10 -- # set +x 00:03:43.783 01:39:25 -- spdk/autotest.sh@59 -- # create_test_list 00:03:43.783 01:39:25 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:43.783 01:39:25 -- common/autotest_common.sh@10 -- # set +x 00:03:44.040 01:39:25 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:44.040 01:39:25 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.040 01:39:25 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.040 01:39:25 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:44.040 01:39:25 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:44.040 01:39:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.040 01:39:25 -- common/autotest_common.sh@1455 -- # uname 00:03:44.040 01:39:25 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:44.040 01:39:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.040 01:39:25 -- common/autotest_common.sh@1475 -- # uname 00:03:44.040 01:39:25 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:44.040 01:39:25 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:44.040 01:39:25 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:44.040 01:39:25 -- spdk/autotest.sh@72 -- # 
hash lcov 00:03:44.040 01:39:25 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:44.040 01:39:25 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:44.040 --rc lcov_branch_coverage=1 00:03:44.040 --rc lcov_function_coverage=1 00:03:44.040 --rc genhtml_branch_coverage=1 00:03:44.040 --rc genhtml_function_coverage=1 00:03:44.040 --rc genhtml_legend=1 00:03:44.040 --rc geninfo_all_blocks=1 00:03:44.040 ' 00:03:44.040 01:39:25 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:44.040 --rc lcov_branch_coverage=1 00:03:44.040 --rc lcov_function_coverage=1 00:03:44.040 --rc genhtml_branch_coverage=1 00:03:44.040 --rc genhtml_function_coverage=1 00:03:44.040 --rc genhtml_legend=1 00:03:44.040 --rc geninfo_all_blocks=1 00:03:44.040 ' 00:03:44.040 01:39:25 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:44.040 --rc lcov_branch_coverage=1 00:03:44.040 --rc lcov_function_coverage=1 00:03:44.040 --rc genhtml_branch_coverage=1 00:03:44.040 --rc genhtml_function_coverage=1 00:03:44.040 --rc genhtml_legend=1 00:03:44.040 --rc geninfo_all_blocks=1 00:03:44.040 --no-external' 00:03:44.040 01:39:25 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:44.040 --rc lcov_branch_coverage=1 00:03:44.040 --rc lcov_function_coverage=1 00:03:44.040 --rc genhtml_branch_coverage=1 00:03:44.040 --rc genhtml_function_coverage=1 00:03:44.040 --rc genhtml_legend=1 00:03:44.040 --rc geninfo_all_blocks=1 00:03:44.040 --no-external' 00:03:44.040 01:39:25 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:44.040 lcov: LCOV version 1.14 00:03:44.040 01:39:25 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:02.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:02.210 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:14.409 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:14.409 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:14.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:14.410 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:14.410 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:14.410 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:14.410 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:14.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:14.411 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:14.411 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:14.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:14.411 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:18.593 01:39:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:18.593 01:39:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.593 01:39:59 -- common/autotest_common.sh@10 -- # set +x 00:04:18.593 01:39:59 -- spdk/autotest.sh@91 -- # rm -f 00:04:18.593 01:39:59 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.157 0000:88:00.0 (8086 0a54): Already using the nvme driver 
00:04:19.157 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:19.157 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:19.157 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:19.157 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:19.157 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:19.157 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:19.157 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:19.157 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:19.157 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:19.157 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:19.157 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:19.157 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:19.157 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:19.157 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:19.157 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:19.415 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:19.415 01:40:01 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:19.415 01:40:01 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:19.415 01:40:01 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:19.415 01:40:01 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:19.415 01:40:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:19.415 01:40:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:19.415 01:40:01 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:19.415 01:40:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.415 01:40:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:19.415 01:40:01 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:19.415 01:40:01 -- spdk/autotest.sh@110 -- # 
for dev in /dev/nvme*n!(*p*) 00:04:19.415 01:40:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:19.415 01:40:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:19.415 01:40:01 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:19.415 01:40:01 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:19.415 No valid GPT data, bailing 00:04:19.415 01:40:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.415 01:40:01 -- scripts/common.sh@391 -- # pt= 00:04:19.415 01:40:01 -- scripts/common.sh@392 -- # return 1 00:04:19.415 01:40:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:19.415 1+0 records in 00:04:19.415 1+0 records out 00:04:19.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00193614 s, 542 MB/s 00:04:19.415 01:40:01 -- spdk/autotest.sh@118 -- # sync 00:04:19.415 01:40:01 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:19.415 01:40:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:19.415 01:40:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:21.325 01:40:03 -- spdk/autotest.sh@124 -- # uname -s 00:04:21.325 01:40:03 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:21.325 01:40:03 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:21.325 01:40:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.325 01:40:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.325 01:40:03 -- common/autotest_common.sh@10 -- # set +x 00:04:21.325 ************************************ 00:04:21.325 START TEST setup.sh 00:04:21.325 ************************************ 00:04:21.325 01:40:03 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:21.325 * Looking for test 
storage... 00:04:21.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:21.325 01:40:03 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:21.325 01:40:03 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:21.325 01:40:03 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:21.326 01:40:03 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.326 01:40:03 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.326 01:40:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.326 ************************************ 00:04:21.326 START TEST acl 00:04:21.326 ************************************ 00:04:21.326 01:40:03 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:21.326 * Looking for test storage... 00:04:21.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:21.326 01:40:03 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:21.326 01:40:03 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:21.326 01:40:03 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:21.326 01:40:03 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:21.326 01:40:03 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.326 01:40:03 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:21.326 01:40:03 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:21.326 01:40:03 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.326 01:40:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.326 01:40:03 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:21.326 01:40:03 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:21.326 01:40:03 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:21.326 01:40:03 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:21.326 01:40:03 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:21.326 01:40:03 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.326 01:40:03 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.695 01:40:04 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:22.695 01:40:04 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:22.695 01:40:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.695 01:40:04 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:22.695 01:40:04 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.695 01:40:04 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:24.070 Hugepages 00:04:24.070 node hugesize free / total 00:04:24.070 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 00:04:24.071 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:24.071 
01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.071 01:40:05 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:24.071 01:40:05 setup.sh.acl -- 
setup/acl.sh@54 -- # run_test denied denied 00:04:24.071 01:40:05 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.071 01:40:05 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.071 01:40:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:24.071 ************************************ 00:04:24.071 START TEST denied 00:04:24.071 ************************************ 00:04:24.071 01:40:05 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:24.071 01:40:05 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:24.071 01:40:05 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:24.071 01:40:05 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:24.071 01:40:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.071 01:40:05 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.448 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:25.448 01:40:07 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:25.448 01:40:07 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:25.448 01:40:07 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:25.448 01:40:07 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:25.448 01:40:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:25.448 01:40:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:25.448 01:40:07 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:25.448 01:40:07 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:25.448 01:40:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.448 01:40:07 
setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.015 00:04:28.015 real 0m3.776s 00:04:28.015 user 0m1.124s 00:04:28.015 sys 0m1.760s 00:04:28.015 01:40:09 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.015 01:40:09 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:28.015 ************************************ 00:04:28.015 END TEST denied 00:04:28.015 ************************************ 00:04:28.015 01:40:09 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:28.015 01:40:09 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.015 01:40:09 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.015 01:40:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:28.015 ************************************ 00:04:28.015 START TEST allowed 00:04:28.015 ************************************ 00:04:28.015 01:40:09 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:28.015 01:40:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:28.015 01:40:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:28.015 01:40:09 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:28.015 01:40:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.015 01:40:09 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.543 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.543 01:40:12 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:30.543 01:40:12 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:30.543 01:40:12 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:30.543 01:40:12 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 
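The denied test above sets `PCI_BLOCKED=' 0000:88:00.0'` and expects setup.sh to skip that controller; the allowed test sets `PCI_ALLOWED=0000:88:00.0` and expects only that controller to be rebound (`nvme -> vfio-pci`). A simplified sketch of that per-device decision — an assumed reduction of the real setup.sh logic, which also handles wildcards and device classes:

```shell
# Assumed simplification: PCI_BLOCKED always wins; a non-empty PCI_ALLOWED
# then acts as a whitelist; otherwise everything is eligible.
pci_can_use() {
  local bdf=$1
  [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1
  [[ -z $PCI_ALLOWED ]] && return 0
  [[ " $PCI_ALLOWED " == *" $bdf "* ]]
}

# The "denied" run: blocked controller is skipped.
PCI_BLOCKED=' 0000:88:00.0' PCI_ALLOWED=''
pci_can_use 0000:88:00.0 && denied_run=bound || denied_run=skipped

# The "allowed" run: the whitelisted controller binds.
PCI_BLOCKED='' PCI_ALLOWED='0000:88:00.0'
pci_can_use 0000:88:00.0 && allowed_run=bound || allowed_run=skipped

echo "denied run: $denied_run, allowed run: $allowed_run"
```

That matches the two log lines seen in this run: `Skipping denied controller at 0000:88:00.0` for the first case and `0000:88:00.0 ... nvme -> vfio-pci` for the second.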
00:04:30.543 01:40:12 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.920 00:04:31.920 real 0m3.903s 00:04:31.920 user 0m1.058s 00:04:31.921 sys 0m1.695s 00:04:31.921 01:40:13 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.921 01:40:13 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:31.921 ************************************ 00:04:31.921 END TEST allowed 00:04:31.921 ************************************ 00:04:31.921 00:04:31.921 real 0m10.448s 00:04:31.921 user 0m3.275s 00:04:31.921 sys 0m5.188s 00:04:31.921 01:40:13 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.921 01:40:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:31.921 ************************************ 00:04:31.921 END TEST acl 00:04:31.921 ************************************ 00:04:31.921 01:40:13 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:31.921 01:40:13 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.921 01:40:13 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.921 01:40:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:31.921 ************************************ 00:04:31.921 START TEST hugepages 00:04:31.921 ************************************ 00:04:31.921 01:40:13 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:31.921 * Looking for test storage... 
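The acl test that finishes here built its device list by parsing `setup.sh status` output: `read -r _ dev _ _ _ driver _` keeps the BDF and driver columns from the `Type BDF Vendor Device NUMA Driver ...` layout shown earlier, and only `*:*:*.*` rows bound to `nvme` are collected (the real loop additionally skips BDFs listed in `PCI_BLOCKED`). The same parse against a hard-coded two-row sample in that layout — the rows are mock data:

```shell
# Two sample rows in the `setup.sh status` column layout (mock data).
status='I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1'

devs=()
declare -A drivers
while read -r _ dev _ _ _ driver _; do
  [[ $dev == *:*:*.* ]] || continue   # keep only rows carrying a PCI BDF
  [[ $driver == nvme ]] || continue   # keep only NVMe-bound controllers
  devs+=("$dev")
  drivers[$dev]=$driver
done <<< "$status"

echo "found ${#devs[@]} NVMe controller(s): ${devs[*]}"
```

With this run's hardware that yields exactly one entry, `0000:88:00.0`, which is why the trace ends with `(( 1 > 0 ))` before the denied/allowed subtests start.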
00:04:31.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 41243500 kB' 'MemAvailable: 44734144 kB' 'Buffers: 2704 kB' 'Cached: 12759960 kB' 'SwapCached: 0 kB' 'Active: 9746848 kB' 'Inactive: 3491988 kB' 'Active(anon): 9354064 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479316 kB' 'Mapped: 193040 kB' 'Shmem: 8877892 kB' 'KReclaimable: 199420 kB' 'Slab: 569072 kB' 'SReclaimable: 199420 kB' 'SUnreclaim: 369652 kB' 'KernelStack: 12720 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 10440848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 
01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.921 01:40:13 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.921 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 
01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:31.922 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:31.923 
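The scan that fills the lines above is setup/common.sh walking /proc/meminfo with `IFS=': ' read -r var val _`, `continue`-ing past every key that is not `Hugepagesize`, then `echo 2048` and `return 0` on the match. A minimal, self-contained sketch of that pattern (function name borrowed from the log; the real helper also supports per-node meminfo files, which this sketch omits):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan from setup/common.sh: split each
# /proc/meminfo line on ': ', skip non-matching keys, print the value.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # val is the number only; the trailing "kB" unit lands in _
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

# Default hugepage size in kB (2048 in this log's run); guard the demo
# call so the sketch exits 0 even on a kernel without hugepage support.
get_meminfo Hugepagesize || echo 'Hugepagesize not reported'
```

Scanning line by line with `continue` is what produces one xtrace record per meminfo key in the log; the loop only short-circuits when the requested key is found.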
01:40:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:31.923 01:40:13 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:31.923 01:40:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.923 01:40:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.923 01:40:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.923 ************************************ 00:04:31.923 START TEST default_setup 00:04:31.923 ************************************ 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.923 01:40:13 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.923 01:40:13 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:33.295 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:33.296 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:33.296 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:33.296 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:33.296 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:33.296 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:33.296 0000:00:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:04:33.296 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:33.296 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:33.296 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:33.296 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:33.296 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:33.296 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:33.296 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:33.296 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:33.296 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:34.234 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43375000 kB' 'MemAvailable: 46865584 kB' 'Buffers: 2704 kB' 'Cached: 12760044 kB' 'SwapCached: 0 kB' 'Active: 9765784 kB' 'Inactive: 3491988 kB' 'Active(anon): 9373000 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498244 kB' 'Mapped: 193064 kB' 'Shmem: 8877976 kB' 'KReclaimable: 199300 kB' 'Slab: 568292 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 368992 kB' 'KernelStack: 12576 kB' 'PageTables: 7728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10462100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 
14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 
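The meminfo snapshot printed just above is captured by the `mapfile -t mem` / `mem=("${mem[@]#Node +([0-9]) }")` pair visible in the log (setup/common.sh@22-29): the whole file is slurped into an array, and the extglob substitution strips the `Node <n> ` prefix that per-node meminfo files carry. A self-contained sketch of that capture step (node selection simplified relative to the real script):

```shell
#!/usr/bin/env bash
# Sketch of the mapfile capture in setup/common.sh: pick a system-wide or
# per-node meminfo file, slurp it into an array, and strip the "Node <n> "
# prefix that lines in /sys/devices/system/node/node*/meminfo begin with.
shopt -s extglob
node=${1:-}                                   # empty -> system-wide file
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")              # no-op for /proc/meminfo
printf '%s\n' "${mem[0]}"                     # first record of the snapshot
```

The prefix strip is what lets the same `IFS=': '` key/value scan work unchanged on both `/proc/meminfo` and the per-node files.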
01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 
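The AnonHugePages lookup running through this stretch sits behind the hugepages.sh@96 guard seen earlier in the log: `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]`, i.e. anonymous hugepages are only counted when /sys/kernel/mm/transparent_hugepage/enabled is not pinned to `[never]`. A hedged sketch of that guard (the awk lookup stands in for the script's own get_meminfo helper, and the sysfs path is the mainline-kernel default):

```shell
#!/usr/bin/env bash
# Sketch of the hugepages.sh@96-97 guard: read the THP policy string
# (e.g. "always [madvise] never") and only count AnonHugePages toward the
# anonymous total when the kernel has not selected [never].
thp_file=/sys/kernel/mm/transparent_hugepage/enabled
anon=0
if [[ -r $thp_file ]]; then
    thp=$(<"$thp_file")
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
fi
echo "anon=${anon:-0} kB"
```

In this run the policy was `always [madvise] never` (madvise selected), so the guard passed and the scan above resolved AnonHugePages to 0 kB.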
01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.234 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.235 01:40:16 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43375104 kB' 'MemAvailable: 46865688 kB' 'Buffers: 2704 kB' 'Cached: 12760052 kB' 'SwapCached: 0 kB' 'Active: 9765820 kB' 'Inactive: 3491988 kB' 'Active(anon): 9373036 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498364 kB' 'Mapped: 193148 kB' 'Shmem: 8877984 kB' 'KReclaimable: 199300 kB' 'Slab: 568340 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 369040 kB' 'KernelStack: 12688 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10463136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.235 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.236 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 
01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43374256 kB' 'MemAvailable: 46864840 kB' 'Buffers: 2704 kB' 'Cached: 12760068 kB' 'SwapCached: 0 kB' 'Active: 9768260 kB' 'Inactive: 3491988 kB' 'Active(anon): 9375476 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500812 kB' 'Mapped: 193824 kB' 'Shmem: 8878000 kB' 'KReclaimable: 199300 kB' 'Slab: 568380 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 369080 kB' 'KernelStack: 12688 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10465736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.237 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.238 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.238 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.238 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.238 01:40:16 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.238 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.238 01:40:16
[xtrace elided: setup/common.sh@32/@31 repeats the same [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue / IFS=': ' / read -r cycle for every non-matching /proc/meminfo key from Active through HugePages_Free]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.239 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:34.239 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.239 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:34.239 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:34.239 nr_hugepages=1024 00:04:34.239 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.239 resv_hugepages=0 00:04:34.239 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.239 surplus_hugepages=0 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.500 anon_hugepages=0 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:34.500 01:40:16
setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.500 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43368076 kB' 'MemAvailable: 46858660 kB' 'Buffers: 2704 kB' 'Cached: 12760092 kB' 'SwapCached: 0 kB' 'Active: 9770960 kB' 'Inactive: 3491988 kB' 'Active(anon): 9378176 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503500 kB' 'Mapped: 193852 kB' 'Shmem: 8878024 kB' 'KReclaimable: 199300 kB' 'Slab: 568380 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 369080 kB' 'KernelStack: 12704 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10468280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195972 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:34.500 01:40:16
[xtrace elided: setup/common.sh@32/@31 repeats the same [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue / IFS=': ' / read -r cycle for every non-matching /proc/meminfo key from MemTotal through Unaccepted; the trace is truncated at this point]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.502 01:40:16 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20903568 kB' 'MemUsed: 11973372 kB' 'SwapCached: 0 kB' 'Active: 5593448 kB' 'Inactive: 3354812 kB' 'Active(anon): 5321556 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8824832 kB' 'Mapped: 84908 kB' 'AnonPages: 126612 kB' 'Shmem: 5198128 kB' 'KernelStack: 6456 kB' 'PageTables: 3160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88244 kB' 'Slab: 290408 kB' 'SReclaimable: 88244 kB' 'SUnreclaim: 202164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 
01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.502 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:34.503 node0=1024 expecting 1024 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == 
\1\0\2\4 ]] 00:04:34.503 00:04:34.503 real 0m2.459s 00:04:34.503 user 0m0.698s 00:04:34.503 sys 0m0.876s 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.503 01:40:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:34.503 ************************************ 00:04:34.503 END TEST default_setup 00:04:34.503 ************************************ 00:04:34.503 01:40:16 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:34.503 01:40:16 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.503 01:40:16 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.503 01:40:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.503 ************************************ 00:04:34.503 START TEST per_node_1G_alloc 00:04:34.503 ************************************ 00:04:34.503 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:34.503 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:34.503 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.504 01:40:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:34.504 01:40:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.504 01:40:16 
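The per-node accounting just traced (`hugepages.sh@58` through `@73`) converts the requested allocation size into a hugepage count and assigns it to each node listed in `HUGENODE`. A sketch under stated assumptions — a 2048 kB default hugepage size, matching the `Hugepagesize: 2048 kB` reported elsewhere in this log:

```shell
# Sketch of get_test_nr_hugepages_per_node as traced above: 1048576 kB (1 GiB)
# at a 2048 kB hugepage size yields 512 pages for each requested node.
size=1048576                     # requested size in kB
default_hugepages=2048           # kB, assumed default hugepage size
nr_hugepages=$((size / default_hugepages))   # 512

user_nodes=(0 1)                 # from HUGENODE=0,1
declare -a nodes_test
for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages   # subscript is evaluated arithmetically
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # prints: node0=512 node1=512
```

This matches the trace, where `nr_hugepages=512` is set at `@57` and `nodes_test[_no_nodes]=512` fires once per node before `NRHUGE=512 HUGENODE=0,1` is exported to `setup.sh`.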
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.438 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.438 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.438 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.438 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.438 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.438 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:35.438 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.438 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:35.438 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:35.438 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.438 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.438 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.438 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.438 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:35.438 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.438 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:35.438 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 
00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.702 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43333148 kB' 'MemAvailable: 46823732 kB' 'Buffers: 2704 kB' 'Cached: 12760168 kB' 'SwapCached: 0 kB' 'Active: 9765356 kB' 'Inactive: 3491988 kB' 'Active(anon): 9372572 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 
kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497844 kB' 'Mapped: 193624 kB' 'Shmem: 8878100 kB' 'KReclaimable: 199300 kB' 'Slab: 568428 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 369128 kB' 'KernelStack: 12704 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10462220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.703 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 
01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43333348 kB' 
'MemAvailable: 46823932 kB' 'Buffers: 2704 kB' 'Cached: 12760168 kB' 'SwapCached: 0 kB' 'Active: 9765660 kB' 'Inactive: 3491988 kB' 'Active(anon): 9372876 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498128 kB' 'Mapped: 193152 kB' 'Shmem: 8878100 kB' 'KReclaimable: 199300 kB' 'Slab: 568404 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 369104 kB' 'KernelStack: 12704 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10462236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.704 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.705 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.705 01:40:17 
[xtrace elided: setup/common.sh@31-32 reads each remaining /proc/meminfo field (SReclaimable ... HugePages_Rsvd) and continues until it reaches HugePages_Surp]
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:35.706 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43332808 kB' 'MemAvailable: 46823392 kB' 'Buffers: 2704 kB' 'Cached: 12760168 kB' 'SwapCached: 0 kB' 'Active: 9765736 kB' 'Inactive: 3491988 kB' 'Active(anon): 9372952 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498120 kB' 'Mapped: 193076 kB' 'Shmem: 8878100 kB' 'KReclaimable: 199300 kB' 'Slab: 568412 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 369112 kB' 'KernelStack: 12688 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10462260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB'
[xtrace elided: setup/common.sh@31-32 reads each /proc/meminfo field (MemTotal ... HugePages_Free) and continues until it reaches HugePages_Rsvd]
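The trace above shows SPDK's `get_meminfo` helper walking `/proc/meminfo` one line at a time with `IFS=': ' read -r var val _`, skipping every field until the requested one matches and then echoing its value. A minimal standalone sketch of that parsing pattern (simplified: the file-path parameter and the absence of per-NUMA-node handling are assumptions for illustration, not the real helper's interface):

```shell
# Sketch of a get_meminfo-style lookup: scan a meminfo-format file for one
# field and print its value. The real helper reads /proc/meminfo, or a
# per-node /sys/devices/system/node/node<N>/meminfo when a node is given.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # A line such as "HugePages_Rsvd: 0" splits into
        # var=HugePages_Rsvd, val=0 (the trailing "kB" unit, if any, lands in _)
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1    # field not present
}
```

Because `IFS` is set to `': '`, a line like `MemFree: 43332808 kB` splits into the field name, the bare number, and the unit, so `val` carries just the value the caller wants.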
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:35.708 nr_hugepages=1024
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:35.708 resv_hugepages=0
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:35.708 surplus_hugepages=0
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:35.708 anon_hugepages=0
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:35.708 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43331924 kB' 'MemAvailable: 46822508 kB' 'Buffers: 2704 kB' 'Cached: 12760212 kB' 'SwapCached: 0 kB' 'Active: 9765548 kB' 'Inactive: 3491988 kB' 'Active(anon): 9372764 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497848 kB' 'Mapped: 193076 kB' 'Shmem: 8878144 kB' 'KReclaimable: 199300 kB' 'Slab: 568412 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 369112 kB' 'KernelStack: 12672 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10462280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB'
[xtrace elided: setup/common.sh@31-32 reads each /proc/meminfo field (MemTotal ...) and continues until it reaches HugePages_Total]
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 
01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.709 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 
01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21929400 kB' 'MemUsed: 10947540 kB' 'SwapCached: 0 kB' 'Active: 5593800 kB' 'Inactive: 3354812 kB' 'Active(anon): 5321908 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8824952 kB' 'Mapped: 84208 kB' 'AnonPages: 126872 kB' 'Shmem: 5198248 kB' 
'KernelStack: 6472 kB' 'PageTables: 3212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88244 kB' 'Slab: 290468 kB' 'SReclaimable: 88244 kB' 'SUnreclaim: 202224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:35.710 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.711 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:35.712 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21402836 kB' 'MemUsed: 6261936 kB' 'SwapCached: 0 kB' 'Active: 4171852 kB' 'Inactive: 137176 kB' 'Active(anon): 4050960 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3937988 kB' 'Mapped: 108868 kB' 'AnonPages: 371088 kB' 'Shmem: 3679920 kB' 'KernelStack: 6248 kB' 'PageTables: 4776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111056 kB' 'Slab: 277944 kB' 'SReclaimable: 111056 kB' 'SUnreclaim: 166888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:04:35.713 01:40:17
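The xtrace above is setup/common.sh's get_meminfo helper: it mapfiles a node's meminfo, strips the `Node N ` prefix, then walks each `Field: value` line with `IFS=': ' read -r var val _`, hitting `continue` until the requested field (HugePages_Surp here) matches, and echoes its value. A minimal sketch of that scan, with inline sample data standing in for /sys/devices/system/node/node1/meminfo and a hypothetical function name `get_field` (not the SPDK helper itself):

```shell
#!/usr/bin/env bash
# Sample data standing in for /sys/devices/system/node/node1/meminfo
# (already stripped of its "Node 1 " prefix).
sample='HugePages_Total: 512
HugePages_Free: 512
HugePages_Surp: 0'

get_field() {   # get_field <name> <<< "$data"  -> prints the value
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields
        echo "$val"
        return 0
    done
    return 1
}

get_field HugePages_Surp <<<"$sample"   # prints 0
```

The `IFS=': '` setting splits on both the colon and the following space, so `var` gets the field name and `val` the number, matching the loop in the log.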
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:35.713
00:04:35.713 real 0m1.338s
00:04:35.713 user 0m0.574s
00:04:35.713 sys 0m0.727s
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:35.713 01:40:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:35.713 ************************************
00:04:35.713 END TEST per_node_1G_alloc
00:04:35.714 ************************************
00:04:35.714 01:40:17 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:35.714 01:40:17 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:35.714 01:40:17 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:35.714 01:40:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:35.972 ************************************
00:04:35.972 START TEST even_2G_alloc
************************************
00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62
-- # local user_nodes 00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.972 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.973 01:40:17 setup.sh.hugepages.even_2G_alloc 
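In the prologue above, get_test_nr_hugepages converts the 2097152 kB request into nr_hugepages=1024 (at the default 2048 kB hugepage size), and with no user-specified nodes, get_test_nr_hugepages_per_node counts `_no_nodes` down from 2, assigning 512 pages to each node. A simplified sketch of that even split (the function name `split_evenly` is ours, not SPDK's; the real helper also honors user-requested per-node counts):

```shell
#!/usr/bin/env bash
# Distribute a hugepage count evenly across NUMA nodes, mirroring the
# (( _no_nodes > 0 )) countdown loop in setup/hugepages.sh.
split_evenly() {   # split_evenly <nr_hugepages> <no_nodes>
    local _nr_hugepages=$1 _no_nodes=$2
    local per_node=$(( _nr_hugepages / _no_nodes ))
    local -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$per_node   # fill from the last node down
        (( _no_nodes-- ))
    done
    echo "${nodes_test[@]}"
}

split_evenly 1024 2   # 1024 pages over 2 nodes -> "512 512"
```

Filling the array from the highest index down matches the `nodes_test[_no_nodes - 1]=512` assignments in the log.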
-- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:36.908 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:36.908 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:36.908 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:36.908 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:36.908 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:36.908 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:36.908 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:36.908 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:36.908 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:36.908 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:36.908 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:36.908 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:36.908 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:36.908 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:36.908 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:36.908 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:36.908 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc --
setup/hugepages.sh@94 -- # local anon 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43333232 kB' 'MemAvailable: 46823816 kB' 'Buffers: 2704 kB' 'Cached: 12760300 kB' 'SwapCached: 0 kB' 'Active: 9765600 kB' 'Inactive: 3491988 kB' 'Active(anon): 9372816 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497780 kB' 'Mapped: 193160 kB' 'Shmem: 8878232 kB' 
'KReclaimable: 199300 kB' 'Slab: 568228 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 368928 kB' 'KernelStack: 12688 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10462644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.173 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 
01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 
00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.174 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43332980 kB' 'MemAvailable: 46823564 kB' 'Buffers: 2704 kB' 'Cached: 12760300 kB' 'SwapCached: 0 kB' 'Active: 9765996 kB' 'Inactive: 3491988 kB' 'Active(anon): 9373212 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498156 kB' 'Mapped: 193084 kB' 'Shmem: 8878232 kB' 'KReclaimable: 199300 kB' 'Slab: 568228 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 368928 kB' 'KernelStack: 12736 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10462660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.175 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
[setup/common.sh@31-32: IFS=': ' read/compare/continue over the remaining /proc/meminfo fields (Slab through HugePages_Rsvd); none match HugePages_Surp]
00:04:37.176 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.176 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:37.176 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:37.177 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:37.177 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.177 01:40:18 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.177 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.177 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.177 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.177 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.177 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.177 01:40:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43333308 kB' 'MemAvailable: 46823892 kB' 'Buffers: 2704 kB' 'Cached: 12760324 kB' 'SwapCached: 0 kB' 'Active: 9765848 kB' 'Inactive: 3491988 kB' 'Active(anon): 9373064 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498000 kB' 'Mapped: 193084 kB' 'Shmem: 8878256 kB' 'KReclaimable: 199300 kB' 'Slab: 568272 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 368972 kB' 'KernelStack: 12704 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10462680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 
'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.177 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
[setup/common.sh@31-32: IFS=': ' read/compare/continue over meminfo fields (Cached through VmallocChunk); none match HugePages_Rsvd]
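Before each scan like this one, setup/common.sh captures meminfo into an array with mapfile and strips any per-node "Node <N> " prefix (the `@28`/`@29` entries earlier in the trace), so node-level and system-level meminfo files parse identically. A small sketch of that prefix strip; the input lines here are illustrative, not values from this run:

```shell
# Sketch of the "Node <N> " prefix strip from setup/common.sh@29.
# Per-node meminfo lines look like "Node 0 MemTotal: ... kB"; removing the
# prefix lets one parser handle /proc/meminfo and per-node meminfo alike.
shopt -s extglob   # +([0-9]) is an extended glob: one or more digits

# Illustrative input, not values from this run:
mem=('Node 0 MemTotal: 30270856 kB' 'Node 0 HugePages_Total: 512')
mem=("${mem[@]#Node +([0-9]) }")   # strip the leading "Node <N> " from each element

printf '%s\n' "${mem[@]}"
```

After the strip, each element starts at the field name, exactly the shape the IFS=': ' read loop above expects.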
[setup/common.sh@31-32: IFS=': ' read/compare/continue over meminfo fields (Percpu through HugePages_Free); none match HugePages_Rsvd]
00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:37.179 nr_hugepages=1024
00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:37.179 resv_hugepages=0
00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.179
surplus_hugepages=0 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.179 anon_hugepages=0 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43333308 kB' 'MemAvailable: 46823892 kB' 'Buffers: 2704 kB' 'Cached: 12760344 kB' 'SwapCached: 0 kB' 'Active: 9765924 kB' 'Inactive: 3491988 kB' 'Active(anon): 9373140 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 
'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498064 kB' 'Mapped: 193084 kB' 'Shmem: 8878276 kB' 'KReclaimable: 199300 kB' 'Slab: 568272 kB' 'SReclaimable: 199300 kB' 'SUnreclaim: 368972 kB' 'KernelStack: 12736 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10462704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.179 01:40:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.179 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.180 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 
'MemFree: 21944880 kB' 'MemUsed: 10932060 kB' 'SwapCached: 0 kB' 'Active: 5593736 kB' 'Inactive: 3354812 kB' 'Active(anon): 5321844 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8825032 kB' 'Mapped: 84208 kB' 'AnonPages: 126644 kB' 'Shmem: 5198328 kB' 'KernelStack: 6440 kB' 'PageTables: 3112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88244 kB' 'Slab: 290464 kB' 'SReclaimable: 88244 kB' 'SUnreclaim: 202220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.181 
01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.181 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: remaining node0 meminfo fields (Active(file) through HugePages_Free) each compared against HugePages_Surp and skipped via continue] 00:04:37.182 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.182 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.182 01:40:19
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.182 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.182 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21388428 kB' 'MemUsed: 6276344 kB' 'SwapCached: 0 kB' 'Active: 4172220 kB' 'Inactive: 137176 kB' 'Active(anon): 4051328 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 
'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3938060 kB' 'Mapped: 108876 kB' 'AnonPages: 371428 kB' 'Shmem: 3679992 kB' 'KernelStack: 6296 kB' 'PageTables: 4884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111056 kB' 'Slab: 277808 kB' 'SReclaimable: 111056 kB' 'SUnreclaim: 166752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:37.183 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: node1 meminfo fields (MemTotal through HugePages_Free) each compared against HugePages_Surp and skipped via continue] 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc --
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:37.184 node0=512 expecting 512 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:37.184 node1=512 expecting 512 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:37.184 00:04:37.184 real 0m1.380s 00:04:37.184 user 0m0.582s 00:04:37.184 sys 0m0.757s 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.184 01:40:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:37.184 ************************************ 00:04:37.184 END TEST even_2G_alloc 00:04:37.184 ************************************ 00:04:37.184 01:40:19 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:37.184 01:40:19 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.184 01:40:19 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.184 01:40:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.184 ************************************ 00:04:37.184 START TEST odd_alloc 00:04:37.184 ************************************ 
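The per-node HugePages_Surp lookups traced above all go through the get_meminfo helper in setup/common.sh: pick /proc/meminfo or the per-node sysfs copy, read it as "field: value" records, print the matching value, and fall back to echo 0 when the field is absent. A minimal standalone sketch of that pattern (the MEMINFO_FILE override is ours, added only for testability; it is not in the real script):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup traced above: scan a meminfo file as
# "field: value" records and print the value of the requested field,
# defaulting to 0 when the field is absent (mirrors the "echo 0" above).
get_meminfo() {
    local get=$1 node=${2:-}
    # MEMINFO_FILE is a test-only override, not part of the real script
    local mem_f=${MEMINFO_FILE:-/proc/meminfo} line var val _
    # Per-node lookups use the sysfs copy when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # sysfs per-node files prefix every record with "Node <n> "
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    echo 0
}
```

Called as in the trace, `get_meminfo HugePages_Surp 1` prints node1's surplus hugepage count, or 0 when the field never matches.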
00:04:37.184 01:40:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:37.184 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:37.184 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:37.184 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:37.184 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.184 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:37.184 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:37.184 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:37.184 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.184 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:37.184 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 
00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.185 01:40:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.565 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:38.565 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:38.565 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:38.565 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:38.565 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:38.565 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:38.565 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:38.565 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:38.565 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:38.565 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:38.565 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:38.565 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:38.565 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:38.565 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:38.565 0000:80:04.2 (8086 0e22): Already using the 
vfio-pci driver 00:04:38.565 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:38.565 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.565 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43364128 kB' 'MemAvailable: 46854704 kB' 'Buffers: 2704 kB' 'Cached: 12760432 kB' 'SwapCached: 0 kB' 'Active: 9762372 kB' 'Inactive: 3491988 kB' 'Active(anon): 9369588 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494516 kB' 'Mapped: 192268 kB' 'Shmem: 8878364 kB' 'KReclaimable: 199284 kB' 'Slab: 567824 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368540 kB' 'KernelStack: 12640 kB' 'PageTables: 7520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 10447416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.565 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.566 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.566 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43363876 kB' 'MemAvailable: 46854452 kB' 'Buffers: 2704 kB' 'Cached: 12760436 kB' 'SwapCached: 0 kB' 'Active: 9762596 kB' 'Inactive: 3491988 kB' 'Active(anon): 9369812 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494712 kB' 'Mapped: 192252 kB' 'Shmem: 8878368 kB' 'KReclaimable: 199284 kB' 'Slab: 567808 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368524 kB' 'KernelStack: 12656 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 10447432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.567 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.567 01:40:20 
[trace condensed: setup/common.sh@31-32 scans the remaining /proc/meminfo keys (SReclaimable through HugePages_Rsvd); none match HugePages_Surp] 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc --
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.568 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.569 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43363700 kB' 'MemAvailable: 46854276 kB' 'Buffers: 2704 kB' 'Cached: 12760452 kB' 'SwapCached: 0 kB' 'Active: 9762608 kB' 'Inactive: 3491988 kB' 'Active(anon): 9369824 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494708 kB' 'Mapped: 192252 kB' 'Shmem: 8878384 kB' 'KReclaimable: 199284 kB' 'Slab: 567856 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368572 kB' 'KernelStack: 12656 kB' 'PageTables: 7552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 10447452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:38.569 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.569 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.569 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.569 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
[trace condensed: setup/common.sh@31-32 scans the remaining /proc/meminfo keys (MemFree through HugePages_Free); none match HugePages_Rsvd] 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
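The trace above shows setup/common.sh's `get_meminfo` walking /proc/meminfo one key at a time: `mapfile` reads the file into an array, a `Node +([0-9]) ` prefix-strip normalizes per-NUMA-node meminfo lines, and an `IFS=': ' read -r var val _` loop splits each line into key and value until the requested key matches. A minimal standalone sketch of that parsing pattern follows; it is a hypothetical reconstruction for illustration, not SPDK's actual setup/common.sh (the real helper also resolves `/sys/devices/system/node/node<N>/meminfo` for per-node queries):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the get_meminfo pattern seen in the trace above.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local -a mem
    local line var val _

    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node <n> " prefix; strip it so both
    # file formats parse identically.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        # Split e.g. "HugePages_Total:    1025" on ':' and ' '
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Demo against a tiny fixture (avoids depending on the host's /proc):
demo=$(mktemp)
printf '%s\n' 'MemTotal: 60541712 kB' 'HugePages_Total: 1025' > "$demo"
get_meminfo HugePages_Total "$demo"   # prints 1025
rm -f "$demo"
```

Against the snapshot printed in this trace, such a lookup yields HugePages_Total=1025, HugePages_Rsvd=0, and HugePages_Surp=0, which is exactly what hugepages.sh then checks with `(( 1025 == nr_hugepages + surp + resv ))`.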
00:04:38.570 nr_hugepages=1025 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.570 resv_hugepages=0 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.570 surplus_hugepages=0 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.570 anon_hugepages=0 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.570 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43364052 kB' 
'MemAvailable: 46854628 kB' 'Buffers: 2704 kB' 'Cached: 12760472 kB' 'SwapCached: 0 kB' 'Active: 9763368 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370584 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495492 kB' 'Mapped: 192252 kB' 'Shmem: 8878404 kB' 'KReclaimable: 199284 kB' 'Slab: 567856 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368572 kB' 'KernelStack: 12736 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 10447104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 
01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.571 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 
01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21950244 kB' 'MemUsed: 10926696 kB' 'SwapCached: 0 kB' 'Active: 5592988 kB' 'Inactive: 3354812 kB' 'Active(anon): 5321096 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8825036 kB' 'Mapped: 83604 kB' 'AnonPages: 125900 kB' 'Shmem: 5198332 kB' 'KernelStack: 6360 kB' 'PageTables: 2884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88244 kB' 'Slab: 290348 kB' 'SReclaimable: 88244 kB' 'SUnreclaim: 202104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.572 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.573 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.574 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21414072 kB' 'MemUsed: 6250700 kB' 'SwapCached: 0 kB' 'Active: 4169840 kB' 'Inactive: 137176 kB' 'Active(anon): 4048948 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3938184 kB' 'Mapped: 108648 kB' 'AnonPages: 368988 kB' 'Shmem: 3680116 kB' 'KernelStack: 6216 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111040 kB' 'Slab: 277516 kB' 'SReclaimable: 111040 kB' 'SUnreclaim: 166476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.574 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 
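The trace above is one full pass through the `get_meminfo` helper in setup/common.sh: it picks `/sys/devices/system/node/node1/meminfo` when a node is given, strips the `Node <N> ` prefix, then scans each `Key: value` line until the requested key (`HugePages_Surp`) matches and echoes its value. Reconstructed from the xtrace, the helper looks roughly like this — an illustrative approximation, not the verbatim SPDK source:

```shell
#!/usr/bin/env bash
# Approximate reconstruction of setup/common.sh's get_meminfo, inferred
# from the xtrace above; the real script may differ in detail.
shopt -s extglob

get_meminfo() {
	local get=$1 node=$2
	local var val _
	local mem_f=/proc/meminfo
	local -a mem

	# Per-node statistics live under /sys when a node index is given.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <N> "; strip it (extglob).
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan "Key: value [kB]" lines until the requested key matches,
	# mirroring the per-field [[ ... ]] / continue records in the trace.
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Total   # system-wide count when no node is given
```

The xtrace's long run of `continue` records is simply this loop skipping every field before `HugePages_Surp`; the final `echo 0` / `return 0` pair is the match.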
00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:38.575 node0=512 expecting 513 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:38.575 node1=513 expecting 512 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:38.575 00:04:38.575 real 0m1.368s 00:04:38.575 user 0m0.539s 00:04:38.575 sys 0m0.786s 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.575 01:40:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:38.575 ************************************ 00:04:38.575 END TEST odd_alloc 00:04:38.575 ************************************ 00:04:38.575 01:40:20 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:38.575 01:40:20 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.575 01:40:20 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.575 01:40:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:38.575 ************************************ 00:04:38.575 START TEST custom_alloc 00:04:38.575 
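The closing check of the odd_alloc test (`node0=512 expecting 513`, `node1=513 expecting 512`, then `[[ 512 513 == \5\1\2\ \5\1\3 ]]`) compares per-node counts order-insensitively: hugepages.sh@126-130 uses the counts themselves as array subscripts, so expanding the subscripts yields the values back sorted and de-duplicated. A minimal sketch of that trick, with hypothetical per-node counts (variable names follow hugepages.sh, but this is illustrative, not the exact source):

```shell
#!/usr/bin/env bash
# Sketch of the order-insensitive comparison behind hugepages.sh@126-130:
# using each count as an indexed-array subscript de-duplicates the values,
# and "${!arr[*]}" expands the subscripts in ascending numeric order.
nodes_test=(513 512)   # hypothetical: pages the test expected per node
nodes_sys=(512 513)    # hypothetical: pages the kernel actually placed

declare -a sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
	sorted_t[nodes_test[node]]=1
	sorted_s[nodes_sys[node]]=1
done

# Both index lists expand to "512 513", so the allocation passes even
# though the kernel put the odd extra page on the other node.
[[ "${!sorted_s[*]}" == "${!sorted_t[*]}" ]] && echo "per-node totals match"
```

This is why the log can print `node0=512 expecting 513` without failing: only the multiset of counts has to match, not their node assignment.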
************************************ 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.575 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 
0 > 0 )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # local user_nodes 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=() 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.576 01:40:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:39.953 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:39.953 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:39.953 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:04:39.953 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.953 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.953 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.953 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.953 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.953 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.953 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:39.953 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:39.953 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.953 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.953 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.953 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.953 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.953 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.953 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:39.953 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:39.953 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.953 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42316276 kB' 'MemAvailable: 45806852 kB' 'Buffers: 2704 kB' 'Cached: 12760564 kB' 'SwapCached: 0 kB' 'Active: 9762784 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370000 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494732 kB' 'Mapped: 192272 kB' 'Shmem: 8878496 kB' 'KReclaimable: 199284 kB' 'Slab: 567776 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368492 kB' 'KernelStack: 12672 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 
kB' 'Committed_AS: 10447808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:39.954 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 [xtrace condensed: every meminfo key from MemTotal through HardwareCorrupted compared against AnonHugePages, with `continue` on each non-match] 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42315956 kB' 'MemAvailable: 45806532 kB' 'Buffers: 2704 kB' 'Cached: 12760564 kB' 'SwapCached: 0 kB' 'Active: 9762884 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370100 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494868 kB' 'Mapped: 192260 kB' 'Shmem: 8878496 kB' 'KReclaimable: 199284 kB' 'Slab: 567760 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368476 kB' 'KernelStack: 12720 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 10447824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:39.955 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.955 
01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 [xtrace condensed: meminfo keys MemTotal through FileHugePages each compared against HugePages_Surp, with `continue` on each non-match; trace truncated at this point in the excerpt]
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.956 01:40:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42323116 kB' 'MemAvailable: 45813692 kB' 'Buffers: 2704 kB' 'Cached: 12760564 kB' 'SwapCached: 0 kB' 'Active: 9762852 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370068 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494804 kB' 'Mapped: 192260 kB' 'Shmem: 8878496 kB' 'KReclaimable: 199284 kB' 'Slab: 567828 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368544 kB' 'KernelStack: 12672 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 10447844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:39.956 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.956 01:40:21 
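[editor's note] The trace above comes from a get_meminfo helper that scans /proc/meminfo one line at a time with `IFS=': ' read -r var val _`, skipping every key until the requested one matches. A minimal sketch of that pattern, reconstructed from the trace (the helper name and fallback behaviour are assumptions, not the exact setup/common.sh source):

```shell
#!/usr/bin/env bash
# Sketch of a get_meminfo-style reader: print the value of one
# /proc/meminfo key (e.g. HugePages_Rsvd). Keys that do not match are
# skipped, which is what produces the long run of "continue" lines in
# the trace above. Approximation of setup/common.sh, not the real source.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the key we want: keep scanning
        echo "$val"                        # value in kB, or a bare page count
        return 0
    done < /proc/meminfo
    echo 0                                 # key absent: report 0, as the trace does
}

get_meminfo MemTotal
```

Note the trailing `_` in the `read`: it swallows the `kB` unit suffix so lines like `MemTotal: 60541712 kB` and unit-less lines like `HugePages_Total: 1536` are handled by the same loop.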
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [trace elided: the loop iterates over every /proc/meminfo key from MemTotal through HugePages_Total, matching each against HugePages_Rsvd and continuing] setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:39.957 nr_hugepages=1536 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.957 resv_hugepages=0 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.957 surplus_hugepages=0 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.957 anon_hugepages=0 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 
-- # local var val 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42323396 kB' 'MemAvailable: 45813972 kB' 'Buffers: 2704 kB' 'Cached: 12760608 kB' 'SwapCached: 0 kB' 'Active: 9762804 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370020 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494704 kB' 'Mapped: 192260 kB' 'Shmem: 8878540 kB' 'KReclaimable: 199284 kB' 'Slab: 567812 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368528 kB' 'KernelStack: 12688 kB' 'PageTables: 7584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 10447868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.957 
01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.957 01:40:21 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # 
local var val 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21963944 kB' 'MemUsed: 10912996 kB' 'SwapCached: 0 kB' 'Active: 5593552 kB' 'Inactive: 3354812 kB' 'Active(anon): 5321660 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8825048 kB' 'Mapped: 83604 kB' 'AnonPages: 126508 kB' 'Shmem: 5198344 kB' 'KernelStack: 6472 kB' 'PageTables: 3224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88244 kB' 'Slab: 290344 kB' 'SReclaimable: 88244 kB' 'SUnreclaim: 202100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.958 
01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.958 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.959 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.959 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.959 01:40:21
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.959 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.959 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.959 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.959 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.959 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:40.218 01:40:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.218 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 20357688 kB' 'MemUsed: 7307084 kB' 'SwapCached: 0 kB' 'Active: 4169540 kB' 'Inactive: 137176 kB' 'Active(anon): 4048648 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3938304 kB' 'Mapped: 108656 kB' 'AnonPages: 368476 kB' 'Shmem: 3680236 kB' 'KernelStack: 6232 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111040 kB' 'Slab: 277468 kB' 'SReclaimable: 111040 kB' 'SUnreclaim: 166428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 
01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:40.219 node0=512 expecting 512 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:40.219 node1=1024 expecting 1024 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:40.219 00:04:40.219 real 0m1.430s 00:04:40.219 user 0m0.610s 00:04:40.219 sys 0m0.781s 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.219 01:40:21 setup.sh.hugepages.custom_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:04:40.219 ************************************ 00:04:40.219 END TEST custom_alloc 00:04:40.219 ************************************ 00:04:40.219 01:40:22 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:40.219 01:40:22 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.219 01:40:22 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.219 01:40:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.219 ************************************ 00:04:40.219 START TEST no_shrink_alloc 00:04:40.219 ************************************ 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 
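The `no_shrink_alloc` setup traced above calls `get_test_nr_hugepages 2097152 0`, which yields `nr_hugepages=1024` pinned to node 0. Given the `Hugepagesize: 2048 kB` reported earlier in the meminfo dumps, the arithmetic appears to be a kB size divided by the default hugepage size; the following is a hedged, simplified reconstruction of that step (names mirror setup/hugepages.sh, but this is an assumption-level sketch, not the exact script):

```shell
#!/usr/bin/env bash
# Sketch of the hugepage-count derivation seen in the trace: convert a
# requested total size into a page count, then assign that count to each
# user-specified NUMA node (the trace shows nodes_test[0]=1024).
get_test_nr_hugepages() {
    local size=$1; shift              # requested total size, in kB
    local default_hugepages=2048      # kB; Hugepagesize from /proc/meminfo
    local node_ids=("$@")             # remaining args: target node ids
    nr_hugepages=$(( size / default_hugepages ))
    nodes_test=()
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages
    done
}
```

With the traced arguments, `get_test_nr_hugepages 2097152 0` gives 2097152 / 2048 = 1024 pages on node 0, matching the `nodes_test[_no_nodes]=1024` line in the log.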
00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.219 01:40:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.153 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:41.153 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:41.153 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:41.153 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:41.153 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:41.153 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:41.153 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:41.153 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:41.153 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:41.153 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:41.153 0000:80:04.6 (8086 0e26): Already using 
the vfio-pci driver 00:04:41.153 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:41.153 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:41.153 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:41.153 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:41.153 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:41.153 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- 
# [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43349704 kB' 'MemAvailable: 46840280 kB' 'Buffers: 2704 kB' 'Cached: 12760692 kB' 'SwapCached: 0 kB' 'Active: 9769092 kB' 'Inactive: 3491988 kB' 'Active(anon): 9376308 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501052 kB' 'Mapped: 192712 kB' 'Shmem: 8878624 kB' 'KReclaimable: 199284 kB' 'Slab: 567824 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368540 kB' 'KernelStack: 12736 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10454420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196132 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.418 01:40:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.418 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.419 
01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.419 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43352232 kB' 'MemAvailable: 46842808 kB' 'Buffers: 2704 kB' 'Cached: 12760696 kB' 'SwapCached: 0 kB' 'Active: 9763116 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370332 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495040 kB' 'Mapped: 192680 kB' 'Shmem: 8878628 kB' 'KReclaimable: 199284 kB' 'Slab: 567828 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368544 kB' 'KernelStack: 12752 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10448320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 
01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.420 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical @31/@32 read/compare/continue trace elided for the remaining /proc/meminfo fields (NFS_Unstable through HugePages_Rsvd) -- none match HugePages_Surp ...]
00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.421 
01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.421 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.422 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43352232 kB' 'MemAvailable: 46842808 kB' 'Buffers: 2704 kB' 'Cached: 12760712 kB' 'SwapCached: 0 kB' 'Active: 9763004 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370220 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494928 kB' 'Mapped: 192276 kB' 'Shmem: 8878644 kB' 'KReclaimable: 199284 kB' 'Slab: 567820 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368536 kB' 'KernelStack: 12720 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10448340 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:41.422 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.422 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical @31/@32 read/compare/continue trace elided for the remaining /proc/meminfo fields (MemFree through HugePages_Free) -- none match HugePages_Rsvd ...]
00:04:41.423 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.423 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.423 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@100 -- # resv=0 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.424 nr_hugepages=1024 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.424 resv_hugepages=0 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.424 surplus_hugepages=0 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.424 anon_hugepages=0 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43352232 kB' 'MemAvailable: 46842808 kB' 'Buffers: 2704 kB' 'Cached: 12760736 kB' 'SwapCached: 0 kB' 'Active: 9763184 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370400 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495072 kB' 'Mapped: 192276 kB' 'Shmem: 8878668 kB' 'KReclaimable: 199284 kB' 'Slab: 567820 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368536 kB' 'KernelStack: 12752 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10448364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 01:40:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.425 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.426 
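The long run of `[[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue` lines above is the trace of a simple key lookup: setup/common.sh's `get_meminfo` reads a meminfo file line by line with `IFS=': '`, skips every key until the requested one, then echoes its value (here `1024`). A minimal standalone sketch of that pattern, with an illustrative function name and a temp file standing in for /proc/meminfo:

```shell
# Simplified sketch of the get_meminfo pattern seen in the trace.
# get_meminfo_sketch and the sample file are illustrative, not the
# actual setup/common.sh implementation.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every key until the requested one, matching the
        # trace's repeated "[[ X == ... ]] / continue" lines.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Deterministic sample standing in for /proc/meminfo:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60541712 kB' 'HugePages_Total: 1024' \
    'HugePages_Rsvd: 0' > "$sample"
get_meminfo_sketch HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

With `IFS=': '`, a line like `HugePages_Total: 1024` splits into `var=HugePages_Total` and `val=1024`; for `kB`-suffixed lines the unit lands in the discarded `_` field, which is why the script's comparisons such as `(( 1024 == nr_hugepages + surp + resv ))` can use the value directly.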
01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20898276 kB' 'MemUsed: 11978664 kB' 'SwapCached: 0 kB' 'Active: 5592980 kB' 'Inactive: 3354812 kB' 'Active(anon): 5321088 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8825052 kB' 'Mapped: 83604 kB' 'AnonPages: 125840 kB' 'Shmem: 5198348 kB' 'KernelStack: 6424 kB' 'PageTables: 3096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88244 kB' 'Slab: 290316 kB' 'SReclaimable: 88244 kB' 'SUnreclaim: 202072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 
01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 
01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:41.427 node0=1024 expecting 1024 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:41.427 01:40:23 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.427 01:40:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.361 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:42.361 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:42.361 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:42.361 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:42.361 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:42.361 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:42.361 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:42.361 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:42.361 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:42.361 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:42.361 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:42.361 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:42.361 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:42.361 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:42.623 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:42.624 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:42.624 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:42.624 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:42.624 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43317628 kB' 'MemAvailable: 46808204 kB' 'Buffers: 2704 kB' 'Cached: 12760808 kB' 'SwapCached: 0 kB' 'Active: 9763480 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370696 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495248 kB' 'Mapped: 192320 kB' 'Shmem: 8878740 kB' 'KReclaimable: 199284 kB' 'Slab: 567788 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368504 kB' 'KernelStack: 12784 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10448548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 
01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 
01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.624 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 
43330124 kB' 'MemAvailable: 46820700 kB' 'Buffers: 2704 kB' 'Cached: 12760808 kB' 'SwapCached: 0 kB' 'Active: 9763568 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370784 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495280 kB' 'Mapped: 192284 kB' 'Shmem: 8878740 kB' 'KReclaimable: 199284 kB' 'Slab: 567756 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368472 kB' 'KernelStack: 12784 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10448564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB'
00:04:42.625 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: every field from MemTotal through DirectMap1G is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with "continue"]
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
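The trace above is bash `set -x` output from a `get_meminfo` helper that reads /proc/meminfo line by line with `IFS=': ' read -r var val _` and returns the value for one requested key (here HugePages_Surp, then HugePages_Rsvd). A minimal standalone sketch of that parsing pattern, reconstructed from the trace rather than copied from the SPDK source, so details may differ:

```shell
#!/usr/bin/env bash
# Sketch of the parsing loop visible in the trace (a reconstruction, not the
# verbatim SPDK setup/common.sh helper): split each /proc/meminfo line on
# ": " and print the value of the requested field.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # numeric value only; the "kB" unit lands in $_
            return 0
        fi
    done </proc/meminfo
    return 1              # field not present
}

get_meminfo MemTotal
get_meminfo HugePages_Surp
```

The backslash-escaped pattern in the log (`\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) is just how xtrace renders a quoted right-hand side of `[[ ... == ... ]]`, marking that it is matched literally rather than as a glob.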
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43330480 kB' 'MemAvailable: 46821056 kB' 'Buffers: 2704 kB' 'Cached: 12760828 kB' 'SwapCached: 0 kB' 'Active: 9763464 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370680 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495128 kB' 'Mapped: 192284 kB' 'Shmem: 8878760 kB' 'KReclaimable: 199284 kB' 'Slab: 567828 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368544 kB' 'KernelStack: 12768 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10448588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:42.627 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: each field from MemFree through VmallocChunk is tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with "continue"]
00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:42.629 nr_hugepages=1024 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.629 resv_hugepages=0 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.629 surplus_hugepages=0 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.629 
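The scan that just completed (ending in `echo 0` / `return 0` and `resv=0`) is `get_meminfo` walking a meminfo file field by field with `IFS=': '` until it hits the requested key. A minimal sketch of that helper, assuming the shape suggested by the trace — the `MEM_F` override is our own addition for illustration and is not part of `setup/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch (assumed shape) of get_meminfo: scan a meminfo file with
# IFS=': ' and print the value for one key, or 0 if the key is absent.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=${MEM_F:-/proc/meminfo}   # MEM_F override: illustration only
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node sysfs files prefix each line with "Node <id>"; strip it so
    # /proc/meminfo and the per-node files parse identically.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    echo 0
}

get_meminfo HugePages_Rsvd   # e.g. 0 when no hugepages are reserved
```

In the trace, every non-matching key simply falls through to `continue`, which is why each of the ~50 meminfo fields produces its own `[[ ... == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` / `continue` pair.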
anon_hugepages=0 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.629 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43330388 kB' 'MemAvailable: 46820964 kB' 'Buffers: 2704 kB' 'Cached: 12760848 kB' 'SwapCached: 0 kB' 'Active: 9763484 kB' 'Inactive: 3491988 kB' 'Active(anon): 9370700 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491988 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495156 kB' 'Mapped: 192284 kB' 'Shmem: 8878780 kB' 'KReclaimable: 199284 kB' 'Slab: 567828 kB' 'SReclaimable: 199284 kB' 'SUnreclaim: 368544 kB' 'KernelStack: 12784 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 10448608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 14929920 kB' 'DirectMap1G: 52428800 kB' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 
01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 
01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.630 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:42.631 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.922 01:40:24 
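The trace above steps through `setup/common.sh`'s `get_meminfo` helper one meminfo key at a time: split on `': '`, `continue` past non-matching keys, `echo` the value on a match. A minimal runnable sketch of that loop follows; the `get_meminfo_sketch` name and the canned input are stand-ins of mine, and the real helper additionally strips a leading `Node N ` prefix from per-node meminfo files.

```shell
#!/usr/bin/env bash
# Sketch of the key lookup the trace exercises: split each "Key: value"
# line on ': ', skip (continue) until the requested key matches, then
# echo the value -- the same continue/IFS/read rhythm seen in the log.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # common.sh@32 in the trace
        echo "$val"                        # common.sh@33 in the trace
        return 0
    done
    return 1
}

# Canned excerpt taken from the node0 meminfo printed in the log:
mem='MemTotal: 32876940 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

get_meminfo_sketch HugePages_Total <<<"$mem"   # prints 1024
get_meminfo_sketch HugePages_Surp  <<<"$mem"   # prints 0
```

This is why the trace emits one `[[ Key == HugePages_Total ]]` / `continue` pair per meminfo line: every key before the target is read and skipped in order.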
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20893932 kB' 'MemUsed: 11983008 kB' 'SwapCached: 0 kB' 'Active: 5592472 kB' 'Inactive: 3354812 kB' 'Active(anon): 5320580 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354812 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8825056 kB' 'Mapped: 83604 kB' 'AnonPages: 125304 kB' 'Shmem: 5198352 kB' 'KernelStack: 6424 kB' 'PageTables: 3048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88244 kB' 'Slab: 290364 kB' 'SReclaimable: 88244 kB' 'SUnreclaim: 202120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:42.923 node0=1024 expecting 1024 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:42.923 00:04:42.923 real 0m2.622s 00:04:42.923 user 0m1.126s 00:04:42.923 sys 0m1.409s 00:04:42.923 01:40:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.924 01:40:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:42.924 ************************************ 00:04:42.924 END TEST no_shrink_alloc 00:04:42.924 ************************************ 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:42.924 01:40:24 setup.sh.hugepages -- 
setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:42.924 01:40:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:42.924 00:04:42.924 real 0m10.980s 00:04:42.924 user 0m4.280s 00:04:42.924 sys 0m5.588s 00:04:42.924 01:40:24 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.924 01:40:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.924 ************************************ 00:04:42.924 END TEST hugepages 00:04:42.924 ************************************ 00:04:42.924 01:40:24 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:42.924 01:40:24 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.924 01:40:24 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.924 01:40:24 setup.sh -- common/autotest_common.sh@10 
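The `clear_hp` calls traced just above zero every hugepage-size knob on every NUMA node before the hugepages suite exits. A runnable sketch of the same double loop, pointed at an emulated sysfs tree under a temp dir (my stand-in, so no root access or real `/sys` writes are needed):

```shell
#!/usr/bin/env bash
# Emulate /sys/devices/system/node with one node and one hugepage size.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/node0/hugepages/hugepages-2048kB"
echo 1024 > "$sysfs/node0/hugepages/hugepages-2048kB/nr_hugepages"

# The clear_hp double loop from the trace: for each node, for each
# hugepage size directory, write 0 to nr_hugepages.
for node in "$sysfs"/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done

cat "$sysfs/node0/hugepages/hugepages-2048kB/nr_hugepages"   # prints 0
```

The repeated `echo 0` lines in the log are exactly this loop unrolled over two nodes and two hugepage sizes; `CLEAR_HUGE=yes` is then exported so later setup stages know the pool was reset.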
-- # set +x 00:04:42.924 ************************************ 00:04:42.924 START TEST driver 00:04:42.924 ************************************ 00:04:42.924 01:40:24 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:42.924 * Looking for test storage... 00:04:42.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:42.924 01:40:24 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:42.924 01:40:24 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.924 01:40:24 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.456 01:40:27 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:45.456 01:40:27 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.456 01:40:27 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.456 01:40:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.456 ************************************ 00:04:45.456 START TEST guess_driver 00:04:45.456 ************************************ 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e 
/sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:45.456 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:45.456 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:45.456 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:45.456 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:45.456 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:45.456 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:45.456 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:45.456 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:45.457 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:45.457 
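The `guess_driver` trace above settles on vfio-pci because `/sys/kernel/iommu_groups` is populated (141 groups) and `modprobe --show-depends vfio_pci` resolves to real `.ko` modules. A condensed, runnable sketch of that decision; the function name and the stubbed modprobe output are mine, while the `'No valid driver found'` fallback string comes from the comparison visible in the log:

```shell
#!/usr/bin/env bash
# Condensed driver guess: vfio-pci requires populated IOMMU groups and a
# vfio_pci module that modprobe can resolve to kernel objects; otherwise
# report that no valid driver was found.
pick_driver_sketch() {
    local n_iommu_groups=$1 modprobe_out=$2
    if (( n_iommu_groups > 0 )) && [[ $modprobe_out == *.ko* ]]; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}

# Values mirroring the trace: 141 IOMMU groups, vfio_pci resolving to
# .ko.xz files (path abbreviated here).
pick_driver_sketch 141 'insmod .../drivers/vfio/pci/vfio-pci.ko.xz'
# prints vfio-pci
pick_driver_sketch 0 ''   # prints 'No valid driver found'
```

On a host with the IOMMU disabled the group glob expands to nothing, the count is 0, and the fallback branch is taken instead.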
01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:45.457 Looking for driver=vfio-pci 00:04:45.457 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:45.457 01:40:27 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:45.457 01:40:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.457 01:40:27 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.833 01:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.769 01:40:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.769 01:40:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.769 01:40:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.769 01:40:29 
setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:47.769 01:40:29 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:47.769 01:40:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.769 01:40:29 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.302 00:04:50.302 real 0m4.909s 00:04:50.302 user 0m1.142s 00:04:50.302 sys 0m1.873s 00:04:50.302 01:40:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.302 01:40:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.302 ************************************ 00:04:50.302 END TEST guess_driver 00:04:50.302 ************************************ 00:04:50.302 00:04:50.302 real 0m7.531s 00:04:50.302 user 0m1.776s 00:04:50.302 sys 0m2.871s 00:04:50.302 01:40:32 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.302 01:40:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.302 ************************************ 00:04:50.302 END TEST driver 00:04:50.302 ************************************ 00:04:50.302 01:40:32 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:50.302 01:40:32 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.302 01:40:32 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.302 01:40:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.561 ************************************ 00:04:50.561 START TEST devices 00:04:50.561 ************************************ 00:04:50.561 01:40:32 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:50.561 * Looking for test storage... 
00:04:50.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:50.561 01:40:32 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:50.561 01:40:32 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:50.561 01:40:32 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.561 01:40:32 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:51.937 01:40:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:51.937 01:40:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:51.937 01:40:33 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:51.937 01:40:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:51.937 01:40:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:51.937 01:40:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:51.937 01:40:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:51.937 01:40:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:51.937 01:40:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:51.937 01:40:33 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:51.937 No valid GPT data, bailing 00:04:51.937 01:40:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:51.937 01:40:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:51.937 01:40:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:51.937 01:40:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:51.937 01:40:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:51.937 01:40:33 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:51.937 01:40:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:51.937 01:40:33 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.937 01:40:33 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.937 01:40:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:51.937 ************************************ 00:04:51.937 START TEST nvme_mount 00:04:51.937 ************************************ 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:51.937 01:40:33 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:51.937 01:40:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:52.873 Creating new GPT entries in memory. 00:04:52.873 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:52.873 other utilities. 00:04:52.873 01:40:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:52.873 01:40:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.873 01:40:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.873 01:40:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.873 01:40:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:54.250 Creating new GPT entries in memory. 00:04:54.250 The operation has completed successfully. 
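The `--new=1:2048:2099199` boundaries above follow from the sector arithmetic in setup/common.sh@51-59; a standalone sketch of that calculation:

```shell
# Sector arithmetic behind the sgdisk --new=1:2048:2099199 call above:
# a 1 GiB partition expressed in 512-byte sectors, with the first
# partition conventionally starting at sector 2048.
size=1073741824            # bytes (1 GiB), from common.sh@41
(( size /= 512 ))          # common.sh@51 -> 2097152 sectors
part_start=2048            # common.sh@58: first partition starts at 2048
(( part_end = part_start + size - 1 ))   # common.sh@59
echo "--new=1:${part_start}:${part_end}"   # prints --new=1:2048:2099199
```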
00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2125012 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.250 01:40:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.186 01:40:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.186 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.186 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:55.186 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.186 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.186 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.186 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:55.186 
01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.186 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.186 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.186 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:55.186 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.186 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.186 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.444 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:55.444 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:55.444 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.444 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:55.444 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:55.444 01:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:55.444 01:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.444 01:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:55.444 01:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:55.445 01:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.703 01:40:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:56.639 01:40:38 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:56.898 01:40:38 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.898 01:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:57.832 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.832 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:57.832 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:57.832 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:57.833 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.093 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.093 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:58.093 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:58.093 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:58.093 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.093 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.093 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.093 01:40:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.093 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.093 00:04:58.093 real 0m6.145s 00:04:58.093 user 0m1.413s 00:04:58.093 sys 0m2.313s 00:04:58.093 01:40:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.093 01:40:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:58.093 ************************************ 00:04:58.093 END TEST nvme_mount 00:04:58.093 ************************************ 00:04:58.093 01:40:39 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:58.093 01:40:39 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:04:58.093 01:40:39 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.093 01:40:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:58.093 ************************************ 00:04:58.093 START TEST dm_mount 00:04:58.093 ************************************ 00:04:58.093 01:40:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:58.093 01:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:58.093 01:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:58.093 01:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:58.093 01:40:40 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:58.093 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:58.093 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:58.093 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:58.093 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:58.093 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:58.093 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:58.094 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.094 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.094 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.094 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:58.094 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.094 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.094 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:04:58.094 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:58.094 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:58.094 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:58.094 01:40:40 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:59.028 Creating new GPT entries in memory.
00:04:59.028 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:59.028 other utilities.
00:04:59.028 01:40:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:59.028 01:40:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:59.028 01:40:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:59.028 01:40:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:59.028 01:40:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:05:00.405 Creating new GPT entries in memory.
00:05:00.405 The operation has completed successfully.
00:05:00.405 01:40:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:05:00.405 01:40:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:00.405 01:40:42 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:00.405 01:40:42 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:00.405 01:40:42 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:05:01.340 The operation has completed successfully.
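The @58/@59 arithmetic above computes each partition's sector range before handing it to sgdisk. A sketch of that arithmetic, assuming the values from the log (1 GiB partitions, 512-byte sectors, first partition starting at sector 2048); the sgdisk commands are echoed rather than executed, since running them would rewrite a real disk:

```shell
#!/usr/bin/env bash
# Reproduces the part_start/part_end arithmetic traced above.
disk=/dev/nvme0n1
size=$(( 1073741824 / 512 ))    # partition size in 512-byte sectors (@41, @51)
part_start=0
part_end=0
for part in 1 2; do
    part_start=$(( part_start == 0 ? 2048 : part_end + 1 ))   # @58
    part_end=$(( part_start + size - 1 ))                     # @59
    # The harness serializes the real call with flock (@60); echoed here for safety.
    echo "flock $disk sgdisk $disk --new=$part:$part_start:$part_end"
done
```

The echoed ranges match the log exactly: --new=1:2048:2099199 and --new=2:2099200:4196351.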
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2127395
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:01.340 01:40:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:02.273 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:02.531 01:40:44 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:03.463 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:03.464 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:05:03.722 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:03.722 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:05:03.722 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:03.722 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:03.722 01:40:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:05:03.722
00:05:03.722 real 0m5.478s
00:05:03.722 user 0m0.851s
00:05:03.722 sys 0m1.468s
00:05:03.722 01:40:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:03.722 01:40:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:05:03.722 ************************************
00:05:03.722 END TEST dm_mount
00:05:03.722 ************************************
00:05:03.722 01:40:45 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:05:03.722 01:40:45 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:05:03.722 01:40:45 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:03.722 01:40:45 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:03.722 01:40:45 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:03.722 01:40:45 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:03.722 01:40:45 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:03.981 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:03.981 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:03.981 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:03.981 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:03.981 01:40:45 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:05:03.981 01:40:45 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:03.981 01:40:45 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:03.981 01:40:45 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:03.981 01:40:45 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:03.981 01:40:45 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:03.981 01:40:45 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:03.981
00:05:03.981 real 0m13.481s
00:05:03.981 user 0m2.886s
00:05:03.981 sys 0m4.777s
00:05:03.981 01:40:45 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:03.981 01:40:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:03.981 ************************************
00:05:03.981 END TEST devices
00:05:03.981 ************************************
00:05:03.981
00:05:03.981 real 0m42.669s
00:05:03.981 user 0m12.317s
00:05:03.981 sys 0m18.571s
00:05:03.981 01:40:45 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:03.981 01:40:45 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:03.981 ************************************
00:05:03.981 END TEST setup.sh
00:05:03.981 ************************************
00:05:03.981 01:40:45 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:05.385 Hugepages
00:05:05.385 node hugesize free / total
00:05:05.385 node0 1048576kB 0 / 0
00:05:05.385 node0 2048kB 2048 / 2048
00:05:05.385 node1 1048576kB 0 / 0
00:05:05.385 node1 2048kB 0 / 0
00:05:05.385
00:05:05.385 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:05.385 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:05:05.385 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:05:05.385 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:05:05.385 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:05:05.385 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:05:05.385 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:05:05.385 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:05:05.386 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:05:05.386 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:05:05.386 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:05:05.386 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:05:05.386 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:05:05.386 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:05:05.386 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:05:05.386 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:05:05.386 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:05:05.386 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:05:05.386 01:40:47 -- spdk/autotest.sh@130 -- # uname -s
00:05:05.386 01:40:47 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:05:05.386 01:40:47 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:05:05.386 01:40:47 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:06.321 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:06.321 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:06.321 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:06.580 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:06.580 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:06.580 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:06.580 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:06.580 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:06.580 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:06.580 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:06.580 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:06.580 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:06.580 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:06.580 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:06.580 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:06.580 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:07.517 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:05:07.517 01:40:49 -- common/autotest_common.sh@1532 -- # sleep 1
00:05:08.894 01:40:50 -- common/autotest_common.sh@1533 -- # bdfs=()
00:05:08.894 01:40:50 -- common/autotest_common.sh@1533 -- # local bdfs
00:05:08.894 01:40:50 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs))
00:05:08.894 01:40:50 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs
00:05:08.894 01:40:50 -- common/autotest_common.sh@1513 -- # bdfs=()
00:05:08.894 01:40:50 -- common/autotest_common.sh@1513 -- # local bdfs
00:05:08.894 01:40:50 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:08.894 01:40:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:08.894 01:40:50 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:05:08.894 01:40:50 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:05:08.894 01:40:50 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0
00:05:08.894 01:40:50 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:09.830 Waiting for block devices as requested
00:05:09.830 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:05:10.089 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:05:10.089 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:05:10.089 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:05:10.347 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:05:10.347 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:05:10.347 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:05:10.347 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:05:10.606 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:05:10.606 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:05:10.606 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:05:10.606 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:05:10.865 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:05:10.865 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:05:10.865 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:05:10.865 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:05:11.124 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:05:11.124 01:40:53 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}"
00:05:11.124 01:40:53 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:05:11.124 01:40:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0
00:05:11.124 01:40:53 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme
00:05:11.124 01:40:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:05:11.124 01:40:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:05:11.124 01:40:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:05:11.124 01:40:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0
00:05:11.124 01:40:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0
00:05:11.124 01:40:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]]
00:05:11.124 01:40:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0
00:05:11.124 01:40:53 -- common/autotest_common.sh@1545 -- # grep oacs
00:05:11.124 01:40:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2
00:05:11.124 01:40:53 -- common/autotest_common.sh@1545 -- # oacs=' 0xf'
00:05:11.124 01:40:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8
00:05:11.124 01:40:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]]
00:05:11.124 01:40:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0
00:05:11.124 01:40:53 -- common/autotest_common.sh@1554 -- # grep unvmcap
00:05:11.124 01:40:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2
00:05:11.124 01:40:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0'
00:05:11.124 01:40:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]]
00:05:11.124 01:40:53 -- common/autotest_common.sh@1557 -- # continue
00:05:11.124 01:40:53 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:05:11.124 01:40:53 -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:11.124 01:40:53 -- common/autotest_common.sh@10 -- # set +x
00:05:11.124 01:40:53 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:05:11.124 01:40:53 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:11.124 01:40:53 -- common/autotest_common.sh@10 -- # set +x
00:05:11.124 01:40:53 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:12.500 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:12.500 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:12.500 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:12.500 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:12.500 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:12.500 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:12.500 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:12.500 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:12.500 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:05:12.500 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:05:12.500 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:05:12.500 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:05:12.500 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:05:12.500 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:05:12.500 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:05:12.500 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:05:13.436 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:05:13.436 01:40:55 -- spdk/autotest.sh@140 -- # timing_exit afterboot
00:05:13.436 01:40:55 -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:13.436 01:40:55 -- common/autotest_common.sh@10 -- # set +x
00:05:13.436 01:40:55 -- spdk/autotest.sh@144 -- # opal_revert_cleanup
00:05:13.436 01:40:55 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs
00:05:13.436 01:40:55 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54
00:05:13.436 01:40:55 -- common/autotest_common.sh@1577 -- # bdfs=()
00:05:13.436 01:40:55 -- common/autotest_common.sh@1577 -- # local bdfs
00:05:13.436 01:40:55 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs
00:05:13.436 01:40:55 -- common/autotest_common.sh@1513 -- # bdfs=()
00:05:13.436 01:40:55 -- common/autotest_common.sh@1513 -- # local bdfs
00:05:13.436 01:40:55 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:13.436 01:40:55 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:13.436 01:40:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:05:13.695 01:40:55 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:05:13.695 01:40:55 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0
00:05:13.695 01:40:55 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs)
00:05:13.695 01:40:55 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device
00:05:13.695 01:40:55 -- common/autotest_common.sh@1580 -- # device=0x0a54
00:05:13.695 01:40:55 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:05:13.695 01:40:55 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf)
00:05:13.695 01:40:55 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0
00:05:13.695 01:40:55 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]]
00:05:13.695 01:40:55 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2132579
00:05:13.695 01:40:55 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:13.695 01:40:55 -- common/autotest_common.sh@1598 -- # waitforlisten 2132579
00:05:13.695 01:40:55 -- common/autotest_common.sh@831 -- # '[' -z 2132579 ']'
00:05:13.695 01:40:55 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:13.695 01:40:55 -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:13.695 01:40:55 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:13.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:13.695 01:40:55 -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:13.695 01:40:55 -- common/autotest_common.sh@10 -- # set +x
00:05:13.695 [2024-07-26 01:40:55.549850] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... [2024-07-26 01:40:55.549951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2132579 ]
00:05:13.695 EAL: No free 2048 kB hugepages reported on node 1
00:05:13.695 [2024-07-26 01:40:55.614050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.695 [2024-07-26 01:40:55.703441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.953 01:40:55 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:13.953 01:40:55 -- common/autotest_common.sh@864 -- # return 0
00:05:13.953 01:40:55 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:05:13.953 01:40:55 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:05:13.953 01:40:55 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:05:17.238 nvme0n1
00:05:17.238 01:40:59 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:17.497 [2024-07-26 01:40:59.261636] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:05:17.497 [2024-07-26 01:40:59.261685] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:05:17.497 request:
00:05:17.497 {
00:05:17.497 "nvme_ctrlr_name": "nvme0",
00:05:17.497 "password": "test",
00:05:17.497 "method": "bdev_nvme_opal_revert",
00:05:17.497 "req_id": 1
00:05:17.497 }
00:05:17.497 Got JSON-RPC error response
00:05:17.497 response:
00:05:17.497 {
00:05:17.497 "code": -32603,
00:05:17.497 "message": "Internal error"
00:05:17.497 }
00:05:17.497 01:40:59 -- common/autotest_common.sh@1604 -- # true
00:05:17.497 01:40:59 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:05:17.497 01:40:59 -- common/autotest_common.sh@1608 -- # killprocess 2132579
00:05:17.497 01:40:59 -- common/autotest_common.sh@950 -- # '[' -z 2132579 ']'
00:05:17.497 01:40:59 -- common/autotest_common.sh@954 -- # kill -0 2132579
00:05:17.497 01:40:59 -- common/autotest_common.sh@955 -- # uname
00:05:17.497 01:40:59 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:17.497 01:40:59 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2132579
00:05:17.497 01:40:59 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:17.497 01:40:59 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:17.497 01:40:59 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2132579'
00:05:17.497 killing process with pid 2132579
00:05:17.497 01:40:59 -- common/autotest_common.sh@969 -- # kill 2132579
00:05:17.497 01:40:59 -- common/autotest_common.sh@974 -- # wait 2132579
00:05:19.397 01:41:01 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:05:19.397 01:41:01 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:05:19.398 01:41:01 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:19.398 01:41:01 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:19.398 01:41:01 -- spdk/autotest.sh@162 -- # timing_enter lib
00:05:19.398 01:41:01 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:19.398
01:41:01 -- common/autotest_common.sh@10 -- # set +x 00:05:19.398 01:41:01 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:19.398 01:41:01 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:19.398 01:41:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.398 01:41:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.398 01:41:01 -- common/autotest_common.sh@10 -- # set +x 00:05:19.398 ************************************ 00:05:19.398 START TEST env 00:05:19.398 ************************************ 00:05:19.398 01:41:01 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:19.398 * Looking for test storage... 00:05:19.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:19.398 01:41:01 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.398 01:41:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.398 01:41:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.398 01:41:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.398 ************************************ 00:05:19.398 START TEST env_memory 00:05:19.398 ************************************ 00:05:19.398 01:41:01 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.398 00:05:19.398 00:05:19.398 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.398 http://cunit.sourceforge.net/ 00:05:19.398 00:05:19.398 00:05:19.398 Suite: memory 00:05:19.398 Test: alloc and free memory map ...[2024-07-26 01:41:01.194545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:19.398 passed 00:05:19.398 Test: mem map translation 
...[2024-07-26 01:41:01.215525] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:19.398 [2024-07-26 01:41:01.215548] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:19.398 [2024-07-26 01:41:01.215596] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:19.398 [2024-07-26 01:41:01.215608] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:19.398 passed 00:05:19.398 Test: mem map registration ...[2024-07-26 01:41:01.256164] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:19.398 [2024-07-26 01:41:01.256189] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:19.398 passed 00:05:19.398 Test: mem map adjacent registrations ...passed 00:05:19.398 00:05:19.398 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.398 suites 1 1 n/a 0 0 00:05:19.398 tests 4 4 4 0 0 00:05:19.398 asserts 152 152 152 0 n/a 00:05:19.398 00:05:19.398 Elapsed time = 0.142 seconds 00:05:19.398 00:05:19.398 real 0m0.150s 00:05:19.398 user 0m0.141s 00:05:19.398 sys 0m0.009s 00:05:19.398 01:41:01 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.398 01:41:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:19.398 ************************************ 00:05:19.398 END TEST env_memory 00:05:19.398 
************************************ 00:05:19.398 01:41:01 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.398 01:41:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.398 01:41:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.398 01:41:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.398 ************************************ 00:05:19.398 START TEST env_vtophys 00:05:19.398 ************************************ 00:05:19.398 01:41:01 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.398 EAL: lib.eal log level changed from notice to debug 00:05:19.398 EAL: Detected lcore 0 as core 0 on socket 0 00:05:19.398 EAL: Detected lcore 1 as core 1 on socket 0 00:05:19.398 EAL: Detected lcore 2 as core 2 on socket 0 00:05:19.398 EAL: Detected lcore 3 as core 3 on socket 0 00:05:19.398 EAL: Detected lcore 4 as core 4 on socket 0 00:05:19.398 EAL: Detected lcore 5 as core 5 on socket 0 00:05:19.398 EAL: Detected lcore 6 as core 8 on socket 0 00:05:19.398 EAL: Detected lcore 7 as core 9 on socket 0 00:05:19.398 EAL: Detected lcore 8 as core 10 on socket 0 00:05:19.398 EAL: Detected lcore 9 as core 11 on socket 0 00:05:19.398 EAL: Detected lcore 10 as core 12 on socket 0 00:05:19.398 EAL: Detected lcore 11 as core 13 on socket 0 00:05:19.398 EAL: Detected lcore 12 as core 0 on socket 1 00:05:19.398 EAL: Detected lcore 13 as core 1 on socket 1 00:05:19.398 EAL: Detected lcore 14 as core 2 on socket 1 00:05:19.398 EAL: Detected lcore 15 as core 3 on socket 1 00:05:19.398 EAL: Detected lcore 16 as core 4 on socket 1 00:05:19.398 EAL: Detected lcore 17 as core 5 on socket 1 00:05:19.398 EAL: Detected lcore 18 as core 8 on socket 1 00:05:19.398 EAL: Detected lcore 19 as core 9 on socket 1 00:05:19.398 EAL: Detected lcore 20 as core 10 on socket 1 00:05:19.398 EAL: 
Detected lcore 21 as core 11 on socket 1 00:05:19.398 EAL: Detected lcore 22 as core 12 on socket 1 00:05:19.398 EAL: Detected lcore 23 as core 13 on socket 1 00:05:19.398 EAL: Detected lcore 24 as core 0 on socket 0 00:05:19.398 EAL: Detected lcore 25 as core 1 on socket 0 00:05:19.398 EAL: Detected lcore 26 as core 2 on socket 0 00:05:19.398 EAL: Detected lcore 27 as core 3 on socket 0 00:05:19.398 EAL: Detected lcore 28 as core 4 on socket 0 00:05:19.398 EAL: Detected lcore 29 as core 5 on socket 0 00:05:19.398 EAL: Detected lcore 30 as core 8 on socket 0 00:05:19.398 EAL: Detected lcore 31 as core 9 on socket 0 00:05:19.398 EAL: Detected lcore 32 as core 10 on socket 0 00:05:19.398 EAL: Detected lcore 33 as core 11 on socket 0 00:05:19.398 EAL: Detected lcore 34 as core 12 on socket 0 00:05:19.398 EAL: Detected lcore 35 as core 13 on socket 0 00:05:19.398 EAL: Detected lcore 36 as core 0 on socket 1 00:05:19.398 EAL: Detected lcore 37 as core 1 on socket 1 00:05:19.398 EAL: Detected lcore 38 as core 2 on socket 1 00:05:19.398 EAL: Detected lcore 39 as core 3 on socket 1 00:05:19.398 EAL: Detected lcore 40 as core 4 on socket 1 00:05:19.398 EAL: Detected lcore 41 as core 5 on socket 1 00:05:19.398 EAL: Detected lcore 42 as core 8 on socket 1 00:05:19.398 EAL: Detected lcore 43 as core 9 on socket 1 00:05:19.398 EAL: Detected lcore 44 as core 10 on socket 1 00:05:19.398 EAL: Detected lcore 45 as core 11 on socket 1 00:05:19.398 EAL: Detected lcore 46 as core 12 on socket 1 00:05:19.398 EAL: Detected lcore 47 as core 13 on socket 1 00:05:19.398 EAL: Maximum logical cores by configuration: 128 00:05:19.398 EAL: Detected CPU lcores: 48 00:05:19.398 EAL: Detected NUMA nodes: 2 00:05:19.398 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:19.398 EAL: Detected shared linkage of DPDK 00:05:19.398 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:19.398 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:19.398 EAL: Registered [vdev] bus. 00:05:19.398 EAL: bus.vdev log level changed from disabled to notice 00:05:19.398 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:19.398 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:19.398 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:19.398 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:19.398 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:19.398 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:19.398 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:19.398 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:19.398 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.398 EAL: No shared files mode enabled, IPC is disabled 00:05:19.398 EAL: Bus pci wants IOVA as 'DC' 00:05:19.398 EAL: Bus vdev wants IOVA as 'DC' 00:05:19.398 EAL: Buses did not request a specific IOVA mode. 00:05:19.398 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:19.398 EAL: Selected IOVA mode 'VA' 00:05:19.398 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.398 EAL: Probing VFIO support... 
00:05:19.398 EAL: IOMMU type 1 (Type 1) is supported 00:05:19.398 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:19.398 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:19.398 EAL: VFIO support initialized 00:05:19.398 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.398 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.398 EAL: Setting up physically contiguous memory... 00:05:19.398 EAL: Setting maximum number of open files to 524288 00:05:19.398 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.398 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:19.398 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.398 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.398 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.399 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.399 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.399 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.399 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.399 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.399 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.399 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.399 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.399 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.399 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.399 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.399 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:19.399 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.399 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.399 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.399 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.399 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.399 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.399 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.399 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.399 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.399 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.399 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:19.399 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.399 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:19.399 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.399 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.399 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:19.399 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:19.399 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.399 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:19.399 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.399 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.399 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:19.399 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:19.399 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.399 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:19.399 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.399 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.399 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:19.399 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:19.399 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.399 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:19.399 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.399 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.399 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:05:19.399 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:19.399 EAL: Hugepages will be freed exactly as allocated. 00:05:19.399 EAL: No shared files mode enabled, IPC is disabled 00:05:19.399 EAL: No shared files mode enabled, IPC is disabled 00:05:19.399 EAL: TSC frequency is ~2700000 KHz 00:05:19.399 EAL: Main lcore 0 is ready (tid=7f4f2e600a00;cpuset=[0]) 00:05:19.399 EAL: Trying to obtain current memory policy. 00:05:19.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.399 EAL: Restoring previous memory policy: 0 00:05:19.399 EAL: request: mp_malloc_sync 00:05:19.399 EAL: No shared files mode enabled, IPC is disabled 00:05:19.399 EAL: Heap on socket 0 was expanded by 2MB 00:05:19.399 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:19.657 EAL: Mem event callback 'spdk:(nil)' registered 00:05:19.657 00:05:19.657 00:05:19.657 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.657 http://cunit.sourceforge.net/ 00:05:19.657 00:05:19.657 00:05:19.657 Suite: components_suite 00:05:19.657 Test: vtophys_malloc_test ...passed 00:05:19.657 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:19.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.657 EAL: Restoring previous memory policy: 4 00:05:19.657 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.657 EAL: request: mp_malloc_sync 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: Heap on socket 0 was expanded by 4MB 00:05:19.657 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.657 EAL: request: mp_malloc_sync 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: Heap on socket 0 was shrunk by 4MB 00:05:19.657 EAL: Trying to obtain current memory policy. 
00:05:19.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.657 EAL: Restoring previous memory policy: 4 00:05:19.657 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.657 EAL: request: mp_malloc_sync 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: Heap on socket 0 was expanded by 6MB 00:05:19.657 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.657 EAL: request: mp_malloc_sync 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: Heap on socket 0 was shrunk by 6MB 00:05:19.657 EAL: Trying to obtain current memory policy. 00:05:19.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.657 EAL: Restoring previous memory policy: 4 00:05:19.657 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.657 EAL: request: mp_malloc_sync 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: Heap on socket 0 was expanded by 10MB 00:05:19.657 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.657 EAL: request: mp_malloc_sync 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: Heap on socket 0 was shrunk by 10MB 00:05:19.657 EAL: Trying to obtain current memory policy. 00:05:19.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.657 EAL: Restoring previous memory policy: 4 00:05:19.657 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.657 EAL: request: mp_malloc_sync 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: Heap on socket 0 was expanded by 18MB 00:05:19.657 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.657 EAL: request: mp_malloc_sync 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: Heap on socket 0 was shrunk by 18MB 00:05:19.657 EAL: Trying to obtain current memory policy. 
00:05:19.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.657 EAL: Restoring previous memory policy: 4 00:05:19.657 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.657 EAL: request: mp_malloc_sync 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: Heap on socket 0 was expanded by 34MB 00:05:19.657 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.657 EAL: request: mp_malloc_sync 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: Heap on socket 0 was shrunk by 34MB 00:05:19.657 EAL: Trying to obtain current memory policy. 00:05:19.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.657 EAL: Restoring previous memory policy: 4 00:05:19.657 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.657 EAL: request: mp_malloc_sync 00:05:19.657 EAL: No shared files mode enabled, IPC is disabled 00:05:19.657 EAL: Heap on socket 0 was expanded by 66MB 00:05:19.658 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.658 EAL: request: mp_malloc_sync 00:05:19.658 EAL: No shared files mode enabled, IPC is disabled 00:05:19.658 EAL: Heap on socket 0 was shrunk by 66MB 00:05:19.658 EAL: Trying to obtain current memory policy. 00:05:19.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.658 EAL: Restoring previous memory policy: 4 00:05:19.658 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.658 EAL: request: mp_malloc_sync 00:05:19.658 EAL: No shared files mode enabled, IPC is disabled 00:05:19.658 EAL: Heap on socket 0 was expanded by 130MB 00:05:19.658 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.658 EAL: request: mp_malloc_sync 00:05:19.658 EAL: No shared files mode enabled, IPC is disabled 00:05:19.658 EAL: Heap on socket 0 was shrunk by 130MB 00:05:19.658 EAL: Trying to obtain current memory policy. 
00:05:19.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.658 EAL: Restoring previous memory policy: 4 00:05:19.658 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.658 EAL: request: mp_malloc_sync 00:05:19.658 EAL: No shared files mode enabled, IPC is disabled 00:05:19.658 EAL: Heap on socket 0 was expanded by 258MB 00:05:19.915 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.915 EAL: request: mp_malloc_sync 00:05:19.915 EAL: No shared files mode enabled, IPC is disabled 00:05:19.915 EAL: Heap on socket 0 was shrunk by 258MB 00:05:19.915 EAL: Trying to obtain current memory policy. 00:05:19.915 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.915 EAL: Restoring previous memory policy: 4 00:05:19.915 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.915 EAL: request: mp_malloc_sync 00:05:19.915 EAL: No shared files mode enabled, IPC is disabled 00:05:19.915 EAL: Heap on socket 0 was expanded by 514MB 00:05:20.173 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.173 EAL: request: mp_malloc_sync 00:05:20.173 EAL: No shared files mode enabled, IPC is disabled 00:05:20.173 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.173 EAL: Trying to obtain current memory policy. 
00:05:20.173 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.429 EAL: Restoring previous memory policy: 4 00:05:20.429 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.429 EAL: request: mp_malloc_sync 00:05:20.429 EAL: No shared files mode enabled, IPC is disabled 00:05:20.429 EAL: Heap on socket 0 was expanded by 1026MB 00:05:20.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.942 EAL: request: mp_malloc_sync 00:05:20.942 EAL: No shared files mode enabled, IPC is disabled 00:05:20.942 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:20.942 passed 00:05:20.942 00:05:20.942 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.942 suites 1 1 n/a 0 0 00:05:20.942 tests 2 2 2 0 0 00:05:20.942 asserts 497 497 497 0 n/a 00:05:20.942 00:05:20.942 Elapsed time = 1.369 seconds 00:05:20.942 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.942 EAL: request: mp_malloc_sync 00:05:20.942 EAL: No shared files mode enabled, IPC is disabled 00:05:20.942 EAL: Heap on socket 0 was shrunk by 2MB 00:05:20.942 EAL: No shared files mode enabled, IPC is disabled 00:05:20.942 EAL: No shared files mode enabled, IPC is disabled 00:05:20.942 EAL: No shared files mode enabled, IPC is disabled 00:05:20.942 00:05:20.942 real 0m1.493s 00:05:20.942 user 0m0.860s 00:05:20.942 sys 0m0.592s 00:05:20.942 01:41:02 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.942 01:41:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:20.942 ************************************ 00:05:20.942 END TEST env_vtophys 00:05:20.942 ************************************ 00:05:20.942 01:41:02 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:20.942 01:41:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.942 01:41:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.942 01:41:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.942 
************************************ 00:05:20.942 START TEST env_pci 00:05:20.942 ************************************ 00:05:20.942 01:41:02 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:20.942 00:05:20.942 00:05:20.942 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.942 http://cunit.sourceforge.net/ 00:05:20.942 00:05:20.942 00:05:20.942 Suite: pci 00:05:20.942 Test: pci_hook ...[2024-07-26 01:41:02.900907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2133466 has claimed it 00:05:20.942 EAL: Cannot find device (10000:00:01.0) 00:05:20.942 EAL: Failed to attach device on primary process 00:05:20.942 passed 00:05:20.942 00:05:20.942 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.942 suites 1 1 n/a 0 0 00:05:20.942 tests 1 1 1 0 0 00:05:20.942 asserts 25 25 25 0 n/a 00:05:20.942 00:05:20.942 Elapsed time = 0.021 seconds 00:05:20.942 00:05:20.942 real 0m0.033s 00:05:20.942 user 0m0.012s 00:05:20.942 sys 0m0.021s 00:05:20.942 01:41:02 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.942 01:41:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:20.942 ************************************ 00:05:20.942 END TEST env_pci 00:05:20.942 ************************************ 00:05:20.942 01:41:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:20.942 01:41:02 env -- env/env.sh@15 -- # uname 00:05:20.942 01:41:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:20.942 01:41:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:20.942 01:41:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:20.942 01:41:02 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:20.942 01:41:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.942 01:41:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.199 ************************************ 00:05:21.199 START TEST env_dpdk_post_init 00:05:21.199 ************************************ 00:05:21.199 01:41:02 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.199 EAL: Detected CPU lcores: 48 00:05:21.199 EAL: Detected NUMA nodes: 2 00:05:21.199 EAL: Detected shared linkage of DPDK 00:05:21.199 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.199 EAL: Selected IOVA mode 'VA' 00:05:21.199 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.199 EAL: VFIO support initialized 00:05:21.199 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.199 EAL: Using IOMMU type 1 (Type 1) 00:05:21.199 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:21.199 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:21.199 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:21.199 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:21.199 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:21.199 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:21.199 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:21.199 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:21.199 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:21.199 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:21.199 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 
0000:80:04.2 (socket 1) 00:05:21.457 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:21.457 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:21.457 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:21.457 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:21.457 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:22.020 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:25.389 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:25.389 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:25.389 Starting DPDK initialization... 00:05:25.389 Starting SPDK post initialization... 00:05:25.389 SPDK NVMe probe 00:05:25.389 Attaching to 0000:88:00.0 00:05:25.389 Attached to 0000:88:00.0 00:05:25.389 Cleaning up... 00:05:25.389 00:05:25.389 real 0m4.404s 00:05:25.389 user 0m3.265s 00:05:25.389 sys 0m0.193s 00:05:25.389 01:41:07 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.389 01:41:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.389 ************************************ 00:05:25.389 END TEST env_dpdk_post_init 00:05:25.389 ************************************ 00:05:25.389 01:41:07 env -- env/env.sh@26 -- # uname 00:05:25.389 01:41:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:25.389 01:41:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.646 01:41:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.646 01:41:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.646 01:41:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.646 ************************************ 00:05:25.646 START TEST env_mem_callbacks 00:05:25.646 
************************************ 00:05:25.646 01:41:07 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.646 EAL: Detected CPU lcores: 48 00:05:25.646 EAL: Detected NUMA nodes: 2 00:05:25.646 EAL: Detected shared linkage of DPDK 00:05:25.646 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.646 EAL: Selected IOVA mode 'VA' 00:05:25.646 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.646 EAL: VFIO support initialized 00:05:25.646 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.646 00:05:25.646 00:05:25.646 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.646 http://cunit.sourceforge.net/ 00:05:25.646 00:05:25.646 00:05:25.646 Suite: memory 00:05:25.646 Test: test ... 00:05:25.646 register 0x200000200000 2097152 00:05:25.646 malloc 3145728 00:05:25.646 register 0x200000400000 4194304 00:05:25.646 buf 0x200000500000 len 3145728 PASSED 00:05:25.646 malloc 64 00:05:25.646 buf 0x2000004fff40 len 64 PASSED 00:05:25.646 malloc 4194304 00:05:25.646 register 0x200000800000 6291456 00:05:25.646 buf 0x200000a00000 len 4194304 PASSED 00:05:25.646 free 0x200000500000 3145728 00:05:25.646 free 0x2000004fff40 64 00:05:25.646 unregister 0x200000400000 4194304 PASSED 00:05:25.646 free 0x200000a00000 4194304 00:05:25.646 unregister 0x200000800000 6291456 PASSED 00:05:25.646 malloc 8388608 00:05:25.646 register 0x200000400000 10485760 00:05:25.646 buf 0x200000600000 len 8388608 PASSED 00:05:25.646 free 0x200000600000 8388608 00:05:25.646 unregister 0x200000400000 10485760 PASSED 00:05:25.646 passed 00:05:25.646 00:05:25.646 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.646 suites 1 1 n/a 0 0 00:05:25.646 tests 1 1 1 0 0 00:05:25.646 asserts 15 15 15 0 n/a 00:05:25.646 00:05:25.646 Elapsed time = 0.005 seconds 00:05:25.646 00:05:25.646 real 0m0.048s 00:05:25.646 user 0m0.013s 00:05:25.646 sys 0m0.035s 
00:05:25.646 01:41:07 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.646 01:41:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:25.646 ************************************ 00:05:25.646 END TEST env_mem_callbacks 00:05:25.646 ************************************ 00:05:25.646 00:05:25.646 real 0m6.409s 00:05:25.646 user 0m4.394s 00:05:25.646 sys 0m1.044s 00:05:25.646 01:41:07 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.646 01:41:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.646 ************************************ 00:05:25.646 END TEST env 00:05:25.646 ************************************ 00:05:25.646 01:41:07 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.646 01:41:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.646 01:41:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.646 01:41:07 -- common/autotest_common.sh@10 -- # set +x 00:05:25.646 ************************************ 00:05:25.646 START TEST rpc 00:05:25.647 ************************************ 00:05:25.647 01:41:07 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.647 * Looking for test storage... 
00:05:25.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:25.647 01:41:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2134123 00:05:25.647 01:41:07 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:25.647 01:41:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.647 01:41:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2134123 00:05:25.647 01:41:07 rpc -- common/autotest_common.sh@831 -- # '[' -z 2134123 ']' 00:05:25.647 01:41:07 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.647 01:41:07 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.647 01:41:07 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.647 01:41:07 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.647 01:41:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.647 [2024-07-26 01:41:07.641248] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:05:25.647 [2024-07-26 01:41:07.641323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134123 ] 00:05:25.905 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.905 [2024-07-26 01:41:07.698393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.905 [2024-07-26 01:41:07.783576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:05:25.905 [2024-07-26 01:41:07.783653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2134123' to capture a snapshot of events at runtime. 00:05:25.905 [2024-07-26 01:41:07.783685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:25.905 [2024-07-26 01:41:07.783697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:25.905 [2024-07-26 01:41:07.783707] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2134123 for offline analysis/debug. 00:05:25.905 [2024-07-26 01:41:07.783744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.163 01:41:08 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.163 01:41:08 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:26.164 01:41:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.164 01:41:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.164 01:41:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.164 01:41:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.164 01:41:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.164 01:41:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.164 01:41:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.164 
************************************ 00:05:26.164 START TEST rpc_integrity 00:05:26.164 ************************************ 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.164 { 00:05:26.164 "name": "Malloc0", 00:05:26.164 "aliases": [ 00:05:26.164 "46b4cb48-9e2e-4954-a4bd-880d3ea7e50a" 00:05:26.164 ], 00:05:26.164 "product_name": "Malloc disk", 00:05:26.164 "block_size": 512, 00:05:26.164 "num_blocks": 16384, 00:05:26.164 "uuid": "46b4cb48-9e2e-4954-a4bd-880d3ea7e50a", 00:05:26.164 
"assigned_rate_limits": { 00:05:26.164 "rw_ios_per_sec": 0, 00:05:26.164 "rw_mbytes_per_sec": 0, 00:05:26.164 "r_mbytes_per_sec": 0, 00:05:26.164 "w_mbytes_per_sec": 0 00:05:26.164 }, 00:05:26.164 "claimed": false, 00:05:26.164 "zoned": false, 00:05:26.164 "supported_io_types": { 00:05:26.164 "read": true, 00:05:26.164 "write": true, 00:05:26.164 "unmap": true, 00:05:26.164 "flush": true, 00:05:26.164 "reset": true, 00:05:26.164 "nvme_admin": false, 00:05:26.164 "nvme_io": false, 00:05:26.164 "nvme_io_md": false, 00:05:26.164 "write_zeroes": true, 00:05:26.164 "zcopy": true, 00:05:26.164 "get_zone_info": false, 00:05:26.164 "zone_management": false, 00:05:26.164 "zone_append": false, 00:05:26.164 "compare": false, 00:05:26.164 "compare_and_write": false, 00:05:26.164 "abort": true, 00:05:26.164 "seek_hole": false, 00:05:26.164 "seek_data": false, 00:05:26.164 "copy": true, 00:05:26.164 "nvme_iov_md": false 00:05:26.164 }, 00:05:26.164 "memory_domains": [ 00:05:26.164 { 00:05:26.164 "dma_device_id": "system", 00:05:26.164 "dma_device_type": 1 00:05:26.164 }, 00:05:26.164 { 00:05:26.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.164 "dma_device_type": 2 00:05:26.164 } 00:05:26.164 ], 00:05:26.164 "driver_specific": {} 00:05:26.164 } 00:05:26.164 ]' 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.164 [2024-07-26 01:41:08.169485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.164 [2024-07-26 01:41:08.169535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.164 [2024-07-26 01:41:08.169562] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6cabb0 00:05:26.164 [2024-07-26 01:41:08.169578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.164 [2024-07-26 01:41:08.171119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.164 [2024-07-26 01:41:08.171143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.164 Passthru0 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.164 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.164 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.434 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.434 { 00:05:26.434 "name": "Malloc0", 00:05:26.434 "aliases": [ 00:05:26.434 "46b4cb48-9e2e-4954-a4bd-880d3ea7e50a" 00:05:26.434 ], 00:05:26.434 "product_name": "Malloc disk", 00:05:26.434 "block_size": 512, 00:05:26.434 "num_blocks": 16384, 00:05:26.434 "uuid": "46b4cb48-9e2e-4954-a4bd-880d3ea7e50a", 00:05:26.434 "assigned_rate_limits": { 00:05:26.434 "rw_ios_per_sec": 0, 00:05:26.434 "rw_mbytes_per_sec": 0, 00:05:26.434 "r_mbytes_per_sec": 0, 00:05:26.434 "w_mbytes_per_sec": 0 00:05:26.434 }, 00:05:26.434 "claimed": true, 00:05:26.434 "claim_type": "exclusive_write", 00:05:26.434 "zoned": false, 00:05:26.434 "supported_io_types": { 00:05:26.434 "read": true, 00:05:26.434 "write": true, 00:05:26.434 "unmap": true, 00:05:26.434 "flush": true, 00:05:26.434 "reset": true, 00:05:26.434 "nvme_admin": false, 00:05:26.434 "nvme_io": false, 00:05:26.434 "nvme_io_md": false, 00:05:26.434 "write_zeroes": true, 00:05:26.434 "zcopy": true, 00:05:26.434 "get_zone_info": false, 00:05:26.434 
"zone_management": false, 00:05:26.434 "zone_append": false, 00:05:26.434 "compare": false, 00:05:26.434 "compare_and_write": false, 00:05:26.434 "abort": true, 00:05:26.434 "seek_hole": false, 00:05:26.434 "seek_data": false, 00:05:26.434 "copy": true, 00:05:26.434 "nvme_iov_md": false 00:05:26.434 }, 00:05:26.434 "memory_domains": [ 00:05:26.434 { 00:05:26.434 "dma_device_id": "system", 00:05:26.434 "dma_device_type": 1 00:05:26.434 }, 00:05:26.434 { 00:05:26.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.434 "dma_device_type": 2 00:05:26.434 } 00:05:26.434 ], 00:05:26.434 "driver_specific": {} 00:05:26.434 }, 00:05:26.434 { 00:05:26.434 "name": "Passthru0", 00:05:26.434 "aliases": [ 00:05:26.434 "21d7713b-ef0d-5694-88a4-2fc01ec5efca" 00:05:26.434 ], 00:05:26.434 "product_name": "passthru", 00:05:26.434 "block_size": 512, 00:05:26.434 "num_blocks": 16384, 00:05:26.434 "uuid": "21d7713b-ef0d-5694-88a4-2fc01ec5efca", 00:05:26.434 "assigned_rate_limits": { 00:05:26.434 "rw_ios_per_sec": 0, 00:05:26.434 "rw_mbytes_per_sec": 0, 00:05:26.434 "r_mbytes_per_sec": 0, 00:05:26.434 "w_mbytes_per_sec": 0 00:05:26.434 }, 00:05:26.434 "claimed": false, 00:05:26.434 "zoned": false, 00:05:26.434 "supported_io_types": { 00:05:26.434 "read": true, 00:05:26.434 "write": true, 00:05:26.434 "unmap": true, 00:05:26.434 "flush": true, 00:05:26.434 "reset": true, 00:05:26.434 "nvme_admin": false, 00:05:26.434 "nvme_io": false, 00:05:26.434 "nvme_io_md": false, 00:05:26.434 "write_zeroes": true, 00:05:26.434 "zcopy": true, 00:05:26.434 "get_zone_info": false, 00:05:26.434 "zone_management": false, 00:05:26.434 "zone_append": false, 00:05:26.434 "compare": false, 00:05:26.434 "compare_and_write": false, 00:05:26.434 "abort": true, 00:05:26.434 "seek_hole": false, 00:05:26.434 "seek_data": false, 00:05:26.434 "copy": true, 00:05:26.434 "nvme_iov_md": false 00:05:26.434 }, 00:05:26.434 "memory_domains": [ 00:05:26.434 { 00:05:26.434 "dma_device_id": "system", 00:05:26.434 
"dma_device_type": 1 00:05:26.434 }, 00:05:26.434 { 00:05:26.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.434 "dma_device_type": 2 00:05:26.434 } 00:05:26.434 ], 00:05:26.434 "driver_specific": { 00:05:26.434 "passthru": { 00:05:26.434 "name": "Passthru0", 00:05:26.434 "base_bdev_name": "Malloc0" 00:05:26.434 } 00:05:26.434 } 00:05:26.434 } 00:05:26.434 ]' 00:05:26.434 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.434 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.434 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.434 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.434 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.434 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.434 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.434 01:41:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.434 00:05:26.434 real 0m0.227s 00:05:26.434 user 0m0.154s 00:05:26.434 sys 0m0.015s 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:05:26.434 01:41:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.434 ************************************ 00:05:26.434 END TEST rpc_integrity 00:05:26.434 ************************************ 00:05:26.434 01:41:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.434 01:41:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.434 01:41:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.434 01:41:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.434 ************************************ 00:05:26.434 START TEST rpc_plugins 00:05:26.434 ************************************ 00:05:26.434 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:26.434 01:41:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.434 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.434 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.434 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.434 01:41:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.434 01:41:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.434 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.434 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.434 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.434 01:41:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.435 { 00:05:26.435 "name": "Malloc1", 00:05:26.435 "aliases": [ 00:05:26.435 "e2be8a0a-1421-4ba5-920e-85556047799d" 00:05:26.435 ], 00:05:26.435 "product_name": "Malloc disk", 00:05:26.435 "block_size": 4096, 00:05:26.435 "num_blocks": 256, 00:05:26.435 "uuid": "e2be8a0a-1421-4ba5-920e-85556047799d", 00:05:26.435 "assigned_rate_limits": { 00:05:26.435 
"rw_ios_per_sec": 0, 00:05:26.435 "rw_mbytes_per_sec": 0, 00:05:26.435 "r_mbytes_per_sec": 0, 00:05:26.435 "w_mbytes_per_sec": 0 00:05:26.435 }, 00:05:26.435 "claimed": false, 00:05:26.435 "zoned": false, 00:05:26.435 "supported_io_types": { 00:05:26.435 "read": true, 00:05:26.435 "write": true, 00:05:26.435 "unmap": true, 00:05:26.435 "flush": true, 00:05:26.435 "reset": true, 00:05:26.435 "nvme_admin": false, 00:05:26.435 "nvme_io": false, 00:05:26.435 "nvme_io_md": false, 00:05:26.435 "write_zeroes": true, 00:05:26.435 "zcopy": true, 00:05:26.435 "get_zone_info": false, 00:05:26.435 "zone_management": false, 00:05:26.435 "zone_append": false, 00:05:26.435 "compare": false, 00:05:26.435 "compare_and_write": false, 00:05:26.435 "abort": true, 00:05:26.435 "seek_hole": false, 00:05:26.435 "seek_data": false, 00:05:26.435 "copy": true, 00:05:26.435 "nvme_iov_md": false 00:05:26.435 }, 00:05:26.435 "memory_domains": [ 00:05:26.435 { 00:05:26.435 "dma_device_id": "system", 00:05:26.435 "dma_device_type": 1 00:05:26.435 }, 00:05:26.435 { 00:05:26.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.435 "dma_device_type": 2 00:05:26.435 } 00:05:26.435 ], 00:05:26.435 "driver_specific": {} 00:05:26.435 } 00:05:26.435 ]' 00:05:26.435 01:41:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:26.435 01:41:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.435 01:41:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.435 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.435 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.435 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.435 01:41:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.435 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.435 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- 
# set +x 00:05:26.435 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.435 01:41:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.435 01:41:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:26.435 01:41:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:26.435 00:05:26.435 real 0m0.110s 00:05:26.435 user 0m0.073s 00:05:26.435 sys 0m0.009s 00:05:26.435 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.435 01:41:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.435 ************************************ 00:05:26.435 END TEST rpc_plugins 00:05:26.435 ************************************ 00:05:26.693 01:41:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.693 01:41:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.693 01:41:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.693 01:41:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.693 ************************************ 00:05:26.693 START TEST rpc_trace_cmd_test 00:05:26.693 ************************************ 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:26.693 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2134123", 00:05:26.693 "tpoint_group_mask": "0x8", 00:05:26.693 "iscsi_conn": { 00:05:26.693 "mask": "0x2", 00:05:26.693 
"tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "scsi": { 00:05:26.693 "mask": "0x4", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "bdev": { 00:05:26.693 "mask": "0x8", 00:05:26.693 "tpoint_mask": "0xffffffffffffffff" 00:05:26.693 }, 00:05:26.693 "nvmf_rdma": { 00:05:26.693 "mask": "0x10", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "nvmf_tcp": { 00:05:26.693 "mask": "0x20", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "ftl": { 00:05:26.693 "mask": "0x40", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "blobfs": { 00:05:26.693 "mask": "0x80", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "dsa": { 00:05:26.693 "mask": "0x200", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "thread": { 00:05:26.693 "mask": "0x400", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "nvme_pcie": { 00:05:26.693 "mask": "0x800", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "iaa": { 00:05:26.693 "mask": "0x1000", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "nvme_tcp": { 00:05:26.693 "mask": "0x2000", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "bdev_nvme": { 00:05:26.693 "mask": "0x4000", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 }, 00:05:26.693 "sock": { 00:05:26.693 "mask": "0x8000", 00:05:26.693 "tpoint_mask": "0x0" 00:05:26.693 } 00:05:26.693 }' 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test 
-- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:26.693 00:05:26.693 real 0m0.197s 00:05:26.693 user 0m0.179s 00:05:26.693 sys 0m0.012s 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.693 01:41:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.693 ************************************ 00:05:26.693 END TEST rpc_trace_cmd_test 00:05:26.693 ************************************ 00:05:26.952 01:41:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:26.952 01:41:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:26.952 01:41:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:26.952 01:41:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.952 01:41:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.952 01:41:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.952 ************************************ 00:05:26.952 START TEST rpc_daemon_integrity 00:05:26.952 ************************************ 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.952 01:41:08 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.952 { 00:05:26.952 "name": "Malloc2", 00:05:26.952 "aliases": [ 00:05:26.952 "4019b20f-dead-4691-b71d-cb691eadf77e" 00:05:26.952 ], 00:05:26.952 "product_name": "Malloc disk", 00:05:26.952 "block_size": 512, 00:05:26.952 "num_blocks": 16384, 00:05:26.952 "uuid": "4019b20f-dead-4691-b71d-cb691eadf77e", 00:05:26.952 "assigned_rate_limits": { 00:05:26.952 "rw_ios_per_sec": 0, 00:05:26.952 "rw_mbytes_per_sec": 0, 00:05:26.952 "r_mbytes_per_sec": 0, 00:05:26.952 "w_mbytes_per_sec": 0 00:05:26.952 }, 00:05:26.952 "claimed": false, 00:05:26.952 "zoned": false, 00:05:26.952 "supported_io_types": { 00:05:26.952 "read": true, 00:05:26.952 "write": true, 00:05:26.952 "unmap": true, 00:05:26.952 "flush": true, 00:05:26.952 "reset": true, 00:05:26.952 "nvme_admin": false, 00:05:26.952 "nvme_io": false, 00:05:26.952 "nvme_io_md": false, 00:05:26.952 "write_zeroes": true, 00:05:26.952 "zcopy": true, 00:05:26.952 "get_zone_info": false, 00:05:26.952 "zone_management": false, 00:05:26.952 
"zone_append": false, 00:05:26.952 "compare": false, 00:05:26.952 "compare_and_write": false, 00:05:26.952 "abort": true, 00:05:26.952 "seek_hole": false, 00:05:26.952 "seek_data": false, 00:05:26.952 "copy": true, 00:05:26.952 "nvme_iov_md": false 00:05:26.952 }, 00:05:26.952 "memory_domains": [ 00:05:26.952 { 00:05:26.952 "dma_device_id": "system", 00:05:26.952 "dma_device_type": 1 00:05:26.952 }, 00:05:26.952 { 00:05:26.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.952 "dma_device_type": 2 00:05:26.952 } 00:05:26.952 ], 00:05:26.952 "driver_specific": {} 00:05:26.952 } 00:05:26.952 ]' 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.952 [2024-07-26 01:41:08.835511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:26.952 [2024-07-26 01:41:08.835560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.952 [2024-07-26 01:41:08.835591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6cb6f0 00:05:26.952 [2024-07-26 01:41:08.835608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.952 [2024-07-26 01:41:08.836972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.952 [2024-07-26 01:41:08.837000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.952 Passthru0 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # 
rpc_cmd bdev_get_bdevs 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.952 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.952 { 00:05:26.952 "name": "Malloc2", 00:05:26.952 "aliases": [ 00:05:26.952 "4019b20f-dead-4691-b71d-cb691eadf77e" 00:05:26.952 ], 00:05:26.952 "product_name": "Malloc disk", 00:05:26.952 "block_size": 512, 00:05:26.952 "num_blocks": 16384, 00:05:26.952 "uuid": "4019b20f-dead-4691-b71d-cb691eadf77e", 00:05:26.952 "assigned_rate_limits": { 00:05:26.952 "rw_ios_per_sec": 0, 00:05:26.952 "rw_mbytes_per_sec": 0, 00:05:26.952 "r_mbytes_per_sec": 0, 00:05:26.952 "w_mbytes_per_sec": 0 00:05:26.952 }, 00:05:26.952 "claimed": true, 00:05:26.952 "claim_type": "exclusive_write", 00:05:26.952 "zoned": false, 00:05:26.952 "supported_io_types": { 00:05:26.952 "read": true, 00:05:26.952 "write": true, 00:05:26.952 "unmap": true, 00:05:26.952 "flush": true, 00:05:26.952 "reset": true, 00:05:26.952 "nvme_admin": false, 00:05:26.952 "nvme_io": false, 00:05:26.952 "nvme_io_md": false, 00:05:26.952 "write_zeroes": true, 00:05:26.952 "zcopy": true, 00:05:26.952 "get_zone_info": false, 00:05:26.952 "zone_management": false, 00:05:26.952 "zone_append": false, 00:05:26.952 "compare": false, 00:05:26.952 "compare_and_write": false, 00:05:26.952 "abort": true, 00:05:26.952 "seek_hole": false, 00:05:26.952 "seek_data": false, 00:05:26.952 "copy": true, 00:05:26.952 "nvme_iov_md": false 00:05:26.952 }, 00:05:26.952 "memory_domains": [ 00:05:26.952 { 00:05:26.952 "dma_device_id": "system", 00:05:26.952 "dma_device_type": 1 00:05:26.952 }, 00:05:26.952 { 00:05:26.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.952 "dma_device_type": 2 00:05:26.952 } 00:05:26.952 ], 00:05:26.952 
"driver_specific": {} 00:05:26.952 }, 00:05:26.952 { 00:05:26.953 "name": "Passthru0", 00:05:26.953 "aliases": [ 00:05:26.953 "f84b118b-9215-5bfe-9721-9f8917bab3f9" 00:05:26.953 ], 00:05:26.953 "product_name": "passthru", 00:05:26.953 "block_size": 512, 00:05:26.953 "num_blocks": 16384, 00:05:26.953 "uuid": "f84b118b-9215-5bfe-9721-9f8917bab3f9", 00:05:26.953 "assigned_rate_limits": { 00:05:26.953 "rw_ios_per_sec": 0, 00:05:26.953 "rw_mbytes_per_sec": 0, 00:05:26.953 "r_mbytes_per_sec": 0, 00:05:26.953 "w_mbytes_per_sec": 0 00:05:26.953 }, 00:05:26.953 "claimed": false, 00:05:26.953 "zoned": false, 00:05:26.953 "supported_io_types": { 00:05:26.953 "read": true, 00:05:26.953 "write": true, 00:05:26.953 "unmap": true, 00:05:26.953 "flush": true, 00:05:26.953 "reset": true, 00:05:26.953 "nvme_admin": false, 00:05:26.953 "nvme_io": false, 00:05:26.953 "nvme_io_md": false, 00:05:26.953 "write_zeroes": true, 00:05:26.953 "zcopy": true, 00:05:26.953 "get_zone_info": false, 00:05:26.953 "zone_management": false, 00:05:26.953 "zone_append": false, 00:05:26.953 "compare": false, 00:05:26.953 "compare_and_write": false, 00:05:26.953 "abort": true, 00:05:26.953 "seek_hole": false, 00:05:26.953 "seek_data": false, 00:05:26.953 "copy": true, 00:05:26.953 "nvme_iov_md": false 00:05:26.953 }, 00:05:26.953 "memory_domains": [ 00:05:26.953 { 00:05:26.953 "dma_device_id": "system", 00:05:26.953 "dma_device_type": 1 00:05:26.953 }, 00:05:26.953 { 00:05:26.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.953 "dma_device_type": 2 00:05:26.953 } 00:05:26.953 ], 00:05:26.953 "driver_specific": { 00:05:26.953 "passthru": { 00:05:26.953 "name": "Passthru0", 00:05:26.953 "base_bdev_name": "Malloc2" 00:05:26.953 } 00:05:26.953 } 00:05:26.953 } 00:05:26.953 ]' 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # 
rpc_cmd bdev_passthru_delete Passthru0 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.953 00:05:26.953 real 0m0.227s 00:05:26.953 user 0m0.157s 00:05:26.953 sys 0m0.015s 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.953 01:41:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.953 ************************************ 00:05:26.953 END TEST rpc_daemon_integrity 00:05:26.953 ************************************ 00:05:27.211 01:41:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.211 01:41:08 rpc -- rpc/rpc.sh@84 -- # killprocess 2134123 00:05:27.211 01:41:08 rpc -- common/autotest_common.sh@950 -- # '[' -z 2134123 ']' 
00:05:27.211 01:41:08 rpc -- common/autotest_common.sh@954 -- # kill -0 2134123 00:05:27.211 01:41:08 rpc -- common/autotest_common.sh@955 -- # uname 00:05:27.211 01:41:08 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.211 01:41:08 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2134123 00:05:27.211 01:41:09 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.211 01:41:09 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.211 01:41:09 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2134123' 00:05:27.211 killing process with pid 2134123 00:05:27.211 01:41:09 rpc -- common/autotest_common.sh@969 -- # kill 2134123 00:05:27.211 01:41:09 rpc -- common/autotest_common.sh@974 -- # wait 2134123 00:05:27.469 00:05:27.469 real 0m1.860s 00:05:27.469 user 0m2.372s 00:05:27.469 sys 0m0.555s 00:05:27.469 01:41:09 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.469 01:41:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.469 ************************************ 00:05:27.469 END TEST rpc 00:05:27.469 ************************************ 00:05:27.469 01:41:09 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:27.469 01:41:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.469 01:41:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.469 01:41:09 -- common/autotest_common.sh@10 -- # set +x 00:05:27.469 ************************************ 00:05:27.469 START TEST skip_rpc 00:05:27.469 ************************************ 00:05:27.469 01:41:09 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:27.728 * Looking for test storage... 
00:05:27.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:27.728 01:41:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.728 01:41:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:27.728 01:41:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:27.728 01:41:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.728 01:41:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.728 01:41:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.728 ************************************ 00:05:27.728 START TEST skip_rpc 00:05:27.728 ************************************ 00:05:27.728 01:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:27.728 01:41:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2134562 00:05:27.728 01:41:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:27.728 01:41:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.728 01:41:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:27.728 [2024-07-26 01:41:09.572915] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:05:27.728 [2024-07-26 01:41:09.572978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134562 ] 00:05:27.728 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.728 [2024-07-26 01:41:09.634113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.728 [2024-07-26 01:41:09.727931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2134562 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2134562 ']' 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2134562 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2134562 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2134562' 00:05:32.988 killing process with pid 2134562 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2134562 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2134562 00:05:32.988 00:05:32.988 real 0m5.433s 00:05:32.988 user 0m5.109s 00:05:32.988 sys 0m0.330s 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.988 01:41:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.988 ************************************ 00:05:32.989 END TEST skip_rpc 00:05:32.989 ************************************ 00:05:32.989 01:41:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:32.989 01:41:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.989 01:41:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.989 01:41:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.247 
************************************ 00:05:33.247 START TEST skip_rpc_with_json 00:05:33.247 ************************************ 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2135249 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2135249 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2135249 ']' 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.247 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.247 [2024-07-26 01:41:15.060697] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:05:33.247 [2024-07-26 01:41:15.060800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135249 ] 00:05:33.247 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.247 [2024-07-26 01:41:15.119001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.247 [2024-07-26 01:41:15.207808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.506 [2024-07-26 01:41:15.466169] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:33.506 request: 00:05:33.506 { 00:05:33.506 "trtype": "tcp", 00:05:33.506 "method": "nvmf_get_transports", 00:05:33.506 "req_id": 1 00:05:33.506 } 00:05:33.506 Got JSON-RPC error response 00:05:33.506 response: 00:05:33.506 { 00:05:33.506 "code": -19, 00:05:33.506 "message": "No such device" 00:05:33.506 } 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.506 [2024-07-26 01:41:15.474279] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.506 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.764 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.764 01:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:33.764 { 00:05:33.764 "subsystems": [ 00:05:33.764 { 00:05:33.764 "subsystem": "vfio_user_target", 00:05:33.764 "config": null 00:05:33.764 }, 00:05:33.764 { 00:05:33.764 "subsystem": "keyring", 00:05:33.764 "config": [] 00:05:33.764 }, 00:05:33.764 { 00:05:33.764 "subsystem": "iobuf", 00:05:33.764 "config": [ 00:05:33.764 { 00:05:33.764 "method": "iobuf_set_options", 00:05:33.764 "params": { 00:05:33.764 "small_pool_count": 8192, 00:05:33.764 "large_pool_count": 1024, 00:05:33.764 "small_bufsize": 8192, 00:05:33.764 "large_bufsize": 135168 00:05:33.764 } 00:05:33.764 } 00:05:33.764 ] 00:05:33.764 }, 00:05:33.764 { 00:05:33.764 "subsystem": "sock", 00:05:33.764 "config": [ 00:05:33.764 { 00:05:33.764 "method": "sock_set_default_impl", 00:05:33.764 "params": { 00:05:33.764 "impl_name": "posix" 00:05:33.764 } 00:05:33.764 }, 00:05:33.764 { 00:05:33.764 "method": "sock_impl_set_options", 00:05:33.764 "params": { 00:05:33.764 "impl_name": "ssl", 00:05:33.764 "recv_buf_size": 4096, 00:05:33.764 "send_buf_size": 4096, 00:05:33.764 "enable_recv_pipe": true, 00:05:33.764 "enable_quickack": false, 00:05:33.764 "enable_placement_id": 0, 00:05:33.764 "enable_zerocopy_send_server": true, 00:05:33.764 "enable_zerocopy_send_client": false, 00:05:33.764 "zerocopy_threshold": 0, 
00:05:33.764 "tls_version": 0, 00:05:33.764 "enable_ktls": false 00:05:33.764 } 00:05:33.764 }, 00:05:33.764 { 00:05:33.764 "method": "sock_impl_set_options", 00:05:33.764 "params": { 00:05:33.764 "impl_name": "posix", 00:05:33.764 "recv_buf_size": 2097152, 00:05:33.764 "send_buf_size": 2097152, 00:05:33.764 "enable_recv_pipe": true, 00:05:33.764 "enable_quickack": false, 00:05:33.764 "enable_placement_id": 0, 00:05:33.764 "enable_zerocopy_send_server": true, 00:05:33.764 "enable_zerocopy_send_client": false, 00:05:33.764 "zerocopy_threshold": 0, 00:05:33.764 "tls_version": 0, 00:05:33.764 "enable_ktls": false 00:05:33.764 } 00:05:33.764 } 00:05:33.764 ] 00:05:33.764 }, 00:05:33.764 { 00:05:33.764 "subsystem": "vmd", 00:05:33.764 "config": [] 00:05:33.764 }, 00:05:33.764 { 00:05:33.764 "subsystem": "accel", 00:05:33.764 "config": [ 00:05:33.764 { 00:05:33.764 "method": "accel_set_options", 00:05:33.764 "params": { 00:05:33.764 "small_cache_size": 128, 00:05:33.764 "large_cache_size": 16, 00:05:33.764 "task_count": 2048, 00:05:33.764 "sequence_count": 2048, 00:05:33.764 "buf_count": 2048 00:05:33.764 } 00:05:33.764 } 00:05:33.764 ] 00:05:33.764 }, 00:05:33.764 { 00:05:33.764 "subsystem": "bdev", 00:05:33.764 "config": [ 00:05:33.764 { 00:05:33.764 "method": "bdev_set_options", 00:05:33.764 "params": { 00:05:33.764 "bdev_io_pool_size": 65535, 00:05:33.764 "bdev_io_cache_size": 256, 00:05:33.764 "bdev_auto_examine": true, 00:05:33.764 "iobuf_small_cache_size": 128, 00:05:33.764 "iobuf_large_cache_size": 16 00:05:33.764 } 00:05:33.764 }, 00:05:33.764 { 00:05:33.764 "method": "bdev_raid_set_options", 00:05:33.764 "params": { 00:05:33.764 "process_window_size_kb": 1024, 00:05:33.764 "process_max_bandwidth_mb_sec": 0 00:05:33.764 } 00:05:33.764 }, 00:05:33.764 { 00:05:33.764 "method": "bdev_iscsi_set_options", 00:05:33.764 "params": { 00:05:33.764 "timeout_sec": 30 00:05:33.764 } 00:05:33.764 }, 00:05:33.764 { 00:05:33.764 "method": "bdev_nvme_set_options", 00:05:33.764 
"params": { 00:05:33.764 "action_on_timeout": "none", 00:05:33.764 "timeout_us": 0, 00:05:33.764 "timeout_admin_us": 0, 00:05:33.764 "keep_alive_timeout_ms": 10000, 00:05:33.764 "arbitration_burst": 0, 00:05:33.764 "low_priority_weight": 0, 00:05:33.764 "medium_priority_weight": 0, 00:05:33.764 "high_priority_weight": 0, 00:05:33.765 "nvme_adminq_poll_period_us": 10000, 00:05:33.765 "nvme_ioq_poll_period_us": 0, 00:05:33.765 "io_queue_requests": 0, 00:05:33.765 "delay_cmd_submit": true, 00:05:33.765 "transport_retry_count": 4, 00:05:33.765 "bdev_retry_count": 3, 00:05:33.765 "transport_ack_timeout": 0, 00:05:33.765 "ctrlr_loss_timeout_sec": 0, 00:05:33.765 "reconnect_delay_sec": 0, 00:05:33.765 "fast_io_fail_timeout_sec": 0, 00:05:33.765 "disable_auto_failback": false, 00:05:33.765 "generate_uuids": false, 00:05:33.765 "transport_tos": 0, 00:05:33.765 "nvme_error_stat": false, 00:05:33.765 "rdma_srq_size": 0, 00:05:33.765 "io_path_stat": false, 00:05:33.765 "allow_accel_sequence": false, 00:05:33.765 "rdma_max_cq_size": 0, 00:05:33.765 "rdma_cm_event_timeout_ms": 0, 00:05:33.765 "dhchap_digests": [ 00:05:33.765 "sha256", 00:05:33.765 "sha384", 00:05:33.765 "sha512" 00:05:33.765 ], 00:05:33.765 "dhchap_dhgroups": [ 00:05:33.765 "null", 00:05:33.765 "ffdhe2048", 00:05:33.765 "ffdhe3072", 00:05:33.765 "ffdhe4096", 00:05:33.765 "ffdhe6144", 00:05:33.765 "ffdhe8192" 00:05:33.765 ] 00:05:33.765 } 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "method": "bdev_nvme_set_hotplug", 00:05:33.765 "params": { 00:05:33.765 "period_us": 100000, 00:05:33.765 "enable": false 00:05:33.765 } 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "method": "bdev_wait_for_examine" 00:05:33.765 } 00:05:33.765 ] 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "subsystem": "scsi", 00:05:33.765 "config": null 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "subsystem": "scheduler", 00:05:33.765 "config": [ 00:05:33.765 { 00:05:33.765 "method": "framework_set_scheduler", 00:05:33.765 "params": { 00:05:33.765 
"name": "static" 00:05:33.765 } 00:05:33.765 } 00:05:33.765 ] 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "subsystem": "vhost_scsi", 00:05:33.765 "config": [] 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "subsystem": "vhost_blk", 00:05:33.765 "config": [] 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "subsystem": "ublk", 00:05:33.765 "config": [] 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "subsystem": "nbd", 00:05:33.765 "config": [] 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "subsystem": "nvmf", 00:05:33.765 "config": [ 00:05:33.765 { 00:05:33.765 "method": "nvmf_set_config", 00:05:33.765 "params": { 00:05:33.765 "discovery_filter": "match_any", 00:05:33.765 "admin_cmd_passthru": { 00:05:33.765 "identify_ctrlr": false 00:05:33.765 } 00:05:33.765 } 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "method": "nvmf_set_max_subsystems", 00:05:33.765 "params": { 00:05:33.765 "max_subsystems": 1024 00:05:33.765 } 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "method": "nvmf_set_crdt", 00:05:33.765 "params": { 00:05:33.765 "crdt1": 0, 00:05:33.765 "crdt2": 0, 00:05:33.765 "crdt3": 0 00:05:33.765 } 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "method": "nvmf_create_transport", 00:05:33.765 "params": { 00:05:33.765 "trtype": "TCP", 00:05:33.765 "max_queue_depth": 128, 00:05:33.765 "max_io_qpairs_per_ctrlr": 127, 00:05:33.765 "in_capsule_data_size": 4096, 00:05:33.765 "max_io_size": 131072, 00:05:33.765 "io_unit_size": 131072, 00:05:33.765 "max_aq_depth": 128, 00:05:33.765 "num_shared_buffers": 511, 00:05:33.765 "buf_cache_size": 4294967295, 00:05:33.765 "dif_insert_or_strip": false, 00:05:33.765 "zcopy": false, 00:05:33.765 "c2h_success": true, 00:05:33.765 "sock_priority": 0, 00:05:33.765 "abort_timeout_sec": 1, 00:05:33.765 "ack_timeout": 0, 00:05:33.765 "data_wr_pool_size": 0 00:05:33.765 } 00:05:33.765 } 00:05:33.765 ] 00:05:33.765 }, 00:05:33.765 { 00:05:33.765 "subsystem": "iscsi", 00:05:33.765 "config": [ 00:05:33.765 { 00:05:33.765 "method": "iscsi_set_options", 00:05:33.765 
"params": { 00:05:33.765 "node_base": "iqn.2016-06.io.spdk", 00:05:33.765 "max_sessions": 128, 00:05:33.765 "max_connections_per_session": 2, 00:05:33.765 "max_queue_depth": 64, 00:05:33.765 "default_time2wait": 2, 00:05:33.765 "default_time2retain": 20, 00:05:33.765 "first_burst_length": 8192, 00:05:33.765 "immediate_data": true, 00:05:33.765 "allow_duplicated_isid": false, 00:05:33.765 "error_recovery_level": 0, 00:05:33.765 "nop_timeout": 60, 00:05:33.765 "nop_in_interval": 30, 00:05:33.765 "disable_chap": false, 00:05:33.765 "require_chap": false, 00:05:33.765 "mutual_chap": false, 00:05:33.765 "chap_group": 0, 00:05:33.765 "max_large_datain_per_connection": 64, 00:05:33.765 "max_r2t_per_connection": 4, 00:05:33.765 "pdu_pool_size": 36864, 00:05:33.765 "immediate_data_pool_size": 16384, 00:05:33.765 "data_out_pool_size": 2048 00:05:33.765 } 00:05:33.765 } 00:05:33.765 ] 00:05:33.765 } 00:05:33.765 ] 00:05:33.765 } 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2135249 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2135249 ']' 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2135249 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2135249 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 2135249' 00:05:33.765 killing process with pid 2135249 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2135249 00:05:33.765 01:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2135249 00:05:34.331 01:41:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2135388 00:05:34.331 01:41:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.331 01:41:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2135388 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2135388 ']' 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2135388 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2135388 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2135388' 00:05:39.595 killing process with pid 2135388 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2135388 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2135388 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.595 00:05:39.595 real 0m6.473s 00:05:39.595 user 0m6.063s 00:05:39.595 sys 0m0.693s 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.595 ************************************ 00:05:39.595 END TEST skip_rpc_with_json 00:05:39.595 ************************************ 00:05:39.595 01:41:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:39.595 01:41:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.595 01:41:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.595 01:41:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.595 ************************************ 00:05:39.595 START TEST skip_rpc_with_delay 00:05:39.595 ************************************ 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.595 
01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.595 [2024-07-26 01:41:21.579740] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:39.595 [2024-07-26 01:41:21.579858] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.595 00:05:39.595 real 0m0.069s 00:05:39.595 user 0m0.043s 00:05:39.595 sys 0m0.026s 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.595 01:41:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:39.595 ************************************ 00:05:39.595 END TEST skip_rpc_with_delay 00:05:39.595 ************************************ 00:05:39.853 01:41:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:39.853 01:41:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:39.853 01:41:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:39.853 01:41:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.853 01:41:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.853 01:41:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.853 ************************************ 00:05:39.853 START TEST exit_on_failed_rpc_init 00:05:39.853 ************************************ 00:05:39.853 01:41:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:39.853 01:41:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2136104 00:05:39.853 01:41:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.853 01:41:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2136104 00:05:39.853 01:41:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2136104 ']' 00:05:39.854 01:41:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.854 01:41:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.854 01:41:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.854 01:41:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.854 01:41:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.854 [2024-07-26 01:41:21.692001] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:05:39.854 [2024-07-26 01:41:21.692102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2136104 ] 00:05:39.854 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.854 [2024-07-26 01:41:21.754893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.854 [2024-07-26 01:41:21.850561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.112 01:41:22 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:40.112 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.370 [2024-07-26 01:41:22.162013] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:05:40.370 [2024-07-26 01:41:22.162129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2136113 ] 00:05:40.370 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.370 [2024-07-26 01:41:22.223536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.370 [2024-07-26 01:41:22.317460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.370 [2024-07-26 01:41:22.317601] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:40.370 [2024-07-26 01:41:22.317620] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:40.370 [2024-07-26 01:41:22.317631] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.628 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:40.628 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.628 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2136104 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2136104 ']' 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2136104 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2136104 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2136104' 
00:05:40.629 killing process with pid 2136104 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2136104 00:05:40.629 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2136104 00:05:40.887 00:05:40.887 real 0m1.192s 00:05:40.887 user 0m1.318s 00:05:40.887 sys 0m0.464s 00:05:40.887 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.887 01:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.887 ************************************ 00:05:40.887 END TEST exit_on_failed_rpc_init 00:05:40.887 ************************************ 00:05:40.887 01:41:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.887 00:05:40.887 real 0m13.413s 00:05:40.887 user 0m12.640s 00:05:40.887 sys 0m1.666s 00:05:40.887 01:41:22 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.887 01:41:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.887 ************************************ 00:05:40.887 END TEST skip_rpc 00:05:40.887 ************************************ 00:05:40.887 01:41:22 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:40.887 01:41:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.887 01:41:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.887 01:41:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.146 ************************************ 00:05:41.146 START TEST rpc_client 00:05:41.146 ************************************ 00:05:41.146 01:41:22 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.146 * Looking for test storage... 
00:05:41.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:41.146 01:41:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:41.146 OK 00:05:41.146 01:41:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:41.146 00:05:41.146 real 0m0.066s 00:05:41.146 user 0m0.025s 00:05:41.146 sys 0m0.046s 00:05:41.146 01:41:22 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.146 01:41:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:41.146 ************************************ 00:05:41.146 END TEST rpc_client 00:05:41.146 ************************************ 00:05:41.146 01:41:22 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.146 01:41:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.146 01:41:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.146 01:41:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.146 ************************************ 00:05:41.146 START TEST json_config 00:05:41.146 ************************************ 00:05:41.146 01:41:23 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.146 01:41:23 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.146 01:41:23 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.146 01:41:23 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.146 01:41:23 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.146 01:41:23 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.146 01:41:23 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.146 01:41:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:41.146 01:41:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.146 01:41:23 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.146 01:41:23 json_config -- paths/export.sh@5 -- # export PATH 00:05:41.147 01:41:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.147 01:41:23 json_config -- nvmf/common.sh@47 -- # : 0 00:05:41.147 01:41:23 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.147 01:41:23 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.147 01:41:23 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.147 01:41:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.147 01:41:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.147 01:41:23 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.147 01:41:23 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.147 01:41:23 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:41.147 01:41:23 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:41.147 INFO: JSON configuration test init 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:41.147 01:41:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.147 01:41:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:41.147 01:41:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.147 01:41:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.147 01:41:23 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:41.147 01:41:23 json_config -- json_config/common.sh@9 -- # local app=target 00:05:41.147 01:41:23 json_config -- json_config/common.sh@10 -- # shift 00:05:41.147 01:41:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.147 01:41:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.147 01:41:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.147 01:41:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.147 01:41:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.147 01:41:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2136355 00:05:41.147 01:41:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:41.147 01:41:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:41.147 Waiting for target to run... 00:05:41.147 01:41:23 json_config -- json_config/common.sh@25 -- # waitforlisten 2136355 /var/tmp/spdk_tgt.sock 00:05:41.147 01:41:23 json_config -- common/autotest_common.sh@831 -- # '[' -z 2136355 ']' 00:05:41.147 01:41:23 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.147 01:41:23 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.147 01:41:23 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.147 01:41:23 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.147 01:41:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.147 [2024-07-26 01:41:23.128005] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:05:41.147 [2024-07-26 01:41:23.128120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2136355 ] 00:05:41.405 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.662 [2024-07-26 01:41:23.461543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.662 [2024-07-26 01:41:23.524945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.227 01:41:24 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.227 01:41:24 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:42.227 01:41:24 json_config -- json_config/common.sh@26 -- # echo '' 00:05:42.227 00:05:42.227 01:41:24 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:42.227 01:41:24 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:42.227 01:41:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.227 01:41:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.227 01:41:24 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:42.227 01:41:24 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:42.227 01:41:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.227 01:41:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.227 01:41:24 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:42.227 01:41:24 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:42.227 01:41:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:45.507 
01:41:27 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:45.507 01:41:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:45.507 01:41:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:45.507 01:41:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@51 -- # sort 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:45.507 01:41:27 
json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:45.507 01:41:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:45.507 01:41:27 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:45.508 01:41:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:45.508 01:41:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.765 01:41:27 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:45.765 01:41:27 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:45.765 01:41:27 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:45.765 01:41:27 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.765 01:41:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.765 MallocForNvmf0 00:05:45.765 01:41:27 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:45.765 01:41:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:46.023 MallocForNvmf1 00:05:46.023 01:41:28 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.023 01:41:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.281 [2024-07-26 01:41:28.258066] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.281 01:41:28 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.281 01:41:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.564 01:41:28 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.564 01:41:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.828 01:41:28 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:46.828 01:41:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.086 01:41:28 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.086 01:41:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.344 [2024-07-26 01:41:29.237253] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:47.344 01:41:29 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:47.344 01:41:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.344 01:41:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.344 01:41:29 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:47.344 01:41:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.344 01:41:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.344 01:41:29 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:47.344 01:41:29 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.344 01:41:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.600 MallocBdevForConfigChangeCheck 00:05:47.600 01:41:29 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:47.600 01:41:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.600 01:41:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.600 01:41:29 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:47.600 01:41:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.165 01:41:29 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:48.165 INFO: shutting down applications... 
00:05:48.165 01:41:29 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:48.165 01:41:29 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:48.165 01:41:29 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:48.165 01:41:29 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:50.062 Calling clear_iscsi_subsystem 00:05:50.062 Calling clear_nvmf_subsystem 00:05:50.062 Calling clear_nbd_subsystem 00:05:50.062 Calling clear_ublk_subsystem 00:05:50.062 Calling clear_vhost_blk_subsystem 00:05:50.062 Calling clear_vhost_scsi_subsystem 00:05:50.062 Calling clear_bdev_subsystem 00:05:50.062 01:41:31 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:50.062 01:41:31 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:50.062 01:41:31 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:50.062 01:41:31 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.062 01:41:31 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:50.062 01:41:31 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:50.062 01:41:31 json_config -- json_config/json_config.sh@349 -- # break 00:05:50.062 01:41:31 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:50.062 01:41:31 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:50.062 01:41:31 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:50.062 01:41:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.062 01:41:31 json_config -- json_config/common.sh@35 -- # [[ -n 2136355 ]] 00:05:50.062 01:41:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2136355 00:05:50.062 01:41:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.062 01:41:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.062 01:41:31 json_config -- json_config/common.sh@41 -- # kill -0 2136355 00:05:50.062 01:41:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.631 01:41:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.631 01:41:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.631 01:41:32 json_config -- json_config/common.sh@41 -- # kill -0 2136355 00:05:50.631 01:41:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:50.631 01:41:32 json_config -- json_config/common.sh@43 -- # break 00:05:50.631 01:41:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:50.631 01:41:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.631 SPDK target shutdown done 00:05:50.631 01:41:32 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:50.631 INFO: relaunching applications... 
00:05:50.631 01:41:32 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.631 01:41:32 json_config -- json_config/common.sh@9 -- # local app=target 00:05:50.631 01:41:32 json_config -- json_config/common.sh@10 -- # shift 00:05:50.631 01:41:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.631 01:41:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.631 01:41:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.631 01:41:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.631 01:41:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.631 01:41:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2137557 00:05:50.631 01:41:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.631 01:41:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.631 Waiting for target to run... 00:05:50.631 01:41:32 json_config -- json_config/common.sh@25 -- # waitforlisten 2137557 /var/tmp/spdk_tgt.sock 00:05:50.631 01:41:32 json_config -- common/autotest_common.sh@831 -- # '[' -z 2137557 ']' 00:05:50.631 01:41:32 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.631 01:41:32 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.631 01:41:32 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:50.631 01:41:32 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.631 01:41:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.631 [2024-07-26 01:41:32.500494] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:05:50.631 [2024-07-26 01:41:32.500600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2137557 ] 00:05:50.631 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.889 [2024-07-26 01:41:32.859319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.148 [2024-07-26 01:41:32.923027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.427 [2024-07-26 01:41:35.954926] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.427 [2024-07-26 01:41:35.987430] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:54.427 01:41:36 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.427 01:41:36 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:54.427 01:41:36 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.427 00:05:54.427 01:41:36 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:54.427 01:41:36 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:54.427 INFO: Checking if target configuration is the same... 
00:05:54.427 01:41:36 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.427 01:41:36 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:54.427 01:41:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.427 + '[' 2 -ne 2 ']' 00:05:54.427 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:54.427 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:54.427 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:54.427 +++ basename /dev/fd/62 00:05:54.427 ++ mktemp /tmp/62.XXX 00:05:54.427 + tmp_file_1=/tmp/62.fDH 00:05:54.427 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.427 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.427 + tmp_file_2=/tmp/spdk_tgt_config.json.WpF 00:05:54.427 + ret=0 00:05:54.427 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:54.427 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:54.685 + diff -u /tmp/62.fDH /tmp/spdk_tgt_config.json.WpF 00:05:54.685 + echo 'INFO: JSON config files are the same' 00:05:54.685 INFO: JSON config files are the same 00:05:54.685 + rm /tmp/62.fDH /tmp/spdk_tgt_config.json.WpF 00:05:54.685 + exit 0 00:05:54.685 01:41:36 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:54.685 01:41:36 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:54.685 INFO: changing configuration and checking if this can be detected... 
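The comparison above (save the running config over RPC, sort both JSON files, then `diff`) can be sketched without SPDK at all. This is a minimal stand-in, assuming plain files in place of the RPC output; `sort_json` is a hypothetical equivalent of `config_filter.py -method sort`, implemented with Python's `json` module.

```shell
# Two configs with the same content but different key order (illustrative data).
printf '{"b": 1, "a": 2}' > /tmp/live_config.json
printf '{"a": 2, "b": 1}' > /tmp/saved_config.json

# Normalize by sorting keys, as config_filter.py -method sort does in the test.
sort_json() {
    python3 -c 'import json,sys; json.dump(json.load(sys.stdin), sys.stdout, sort_keys=True, indent=2)'
}

tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
sort_json < /tmp/live_config.json  > "$tmp_file_1"
sort_json < /tmp/saved_config.json > "$tmp_file_2"

# diff exits 0 when the normalized configs match.
if diff -u "$tmp_file_1" "$tmp_file_2"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
```

Key-order differences are the reason for the sort step: `save_config` output ordering is not guaranteed stable across runs, so a raw `diff` of unsorted JSON would produce false positives.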
00:05:54.685 01:41:36 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:54.685 01:41:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:54.685 01:41:36 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.685 01:41:36 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:54.685 01:41:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.943 + '[' 2 -ne 2 ']' 00:05:54.943 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:54.943 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:54.943 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:54.943 +++ basename /dev/fd/62 00:05:54.943 ++ mktemp /tmp/62.XXX 00:05:54.943 + tmp_file_1=/tmp/62.m7C 00:05:54.943 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.943 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.943 + tmp_file_2=/tmp/spdk_tgt_config.json.KVt 00:05:54.943 + ret=0 00:05:54.943 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.201 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.201 + diff -u /tmp/62.m7C /tmp/spdk_tgt_config.json.KVt 00:05:55.202 + ret=1 00:05:55.202 + echo '=== Start of file: /tmp/62.m7C ===' 00:05:55.202 + cat /tmp/62.m7C 00:05:55.202 + echo '=== End of file: /tmp/62.m7C ===' 00:05:55.202 + echo '' 00:05:55.202 + echo '=== Start of file: /tmp/spdk_tgt_config.json.KVt ===' 00:05:55.202 + cat /tmp/spdk_tgt_config.json.KVt 00:05:55.202 + echo '=== End of file: /tmp/spdk_tgt_config.json.KVt ===' 00:05:55.202 + echo '' 00:05:55.202 + rm /tmp/62.m7C /tmp/spdk_tgt_config.json.KVt 00:05:55.202 + exit 1 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:55.202 INFO: configuration change detected. 
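The change-detection branch above is the same diff pipeline, but this time `ret=1` because a bdev was deleted between snapshots. A minimal simulation of that step, assuming plain files in place of the two `save_config` snapshots (file names and contents are illustrative):

```shell
# Before/after snapshots: the second is missing the bdev deleted by
# bdev_malloc_delete in the trace above (simulated with static JSON).
printf '{"bdevs": ["Malloc0", "MallocBdevForConfigChangeCheck"]}\n' > /tmp/before.json
printf '{"bdevs": ["Malloc0"]}\n' > /tmp/after.json

ret=0
diff -u /tmp/before.json /tmp/after.json || ret=1

# A nonzero diff status is what the test reports as a detected change.
if [ "$ret" -eq 1 ]; then
    echo 'INFO: configuration change detected.'
fi
```

Note the test intentionally treats `exit 1` from this path as success: detecting the change is the expected outcome, so the script dumps both files for the log and exits 1 inside the subshell without tripping the ERR trap.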
00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@321 -- # [[ -n 2137557 ]] 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.202 01:41:37 json_config -- json_config/json_config.sh@327 -- # killprocess 2137557 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@950 -- # '[' -z 2137557 ']' 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@954 -- # kill -0 
2137557 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@955 -- # uname 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2137557 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2137557' 00:05:55.202 killing process with pid 2137557 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@969 -- # kill 2137557 00:05:55.202 01:41:37 json_config -- common/autotest_common.sh@974 -- # wait 2137557 00:05:57.101 01:41:38 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.101 01:41:38 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:57.101 01:41:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:57.101 01:41:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.101 01:41:38 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:57.101 01:41:38 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:57.101 INFO: Success 00:05:57.101 00:05:57.101 real 0m15.787s 00:05:57.101 user 0m17.699s 00:05:57.101 sys 0m1.846s 00:05:57.101 01:41:38 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.101 01:41:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.101 ************************************ 00:05:57.102 END TEST json_config 00:05:57.102 ************************************ 00:05:57.102 01:41:38 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.102 01:41:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.102 01:41:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.102 01:41:38 -- common/autotest_common.sh@10 -- # set +x 00:05:57.102 ************************************ 00:05:57.102 START TEST json_config_extra_key 00:05:57.102 ************************************ 00:05:57.102 01:41:38 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:57.102 01:41:38 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.102 01:41:38 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.102 01:41:38 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.102 01:41:38 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.102 01:41:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.102 01:41:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.102 01:41:38 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.102 01:41:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:57.102 01:41:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.102 01:41:38 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:57.102 01:41:38 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:57.102 INFO: launching applications... 
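The `waitforlisten` step that follows the app launch boils down to polling until the target's UNIX-domain RPC socket exists and accepts connections. A minimal sketch of the polling half, assuming a hypothetical socket path and using a short-lived Python process as a stand-in for `spdk_tgt`:

```shell
# Stand-in for spdk_tgt: a background process that creates a UNIX socket
# after a short delay (path and delay are illustrative).
sock=/tmp/demo_rpc.sock
rm -f "$sock"
( sleep 0.2
  python3 -c "import socket, time
s = socket.socket(socket.AF_UNIX)
s.bind('$sock')
time.sleep(1)" ) &

# Poll (up to ~10 s) until the socket file appears, like waitforlisten's
# retry loop with max_retries=100.
for i in $(seq 1 100); do
    [ -S "$sock" ] && break
    sleep 0.1
done

[ -S "$sock" ] && echo 'socket is up'
```

The real helper goes further and issues an RPC over the socket to confirm the target is responsive, not merely bound; checking only for the socket file can race with app startup.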
00:05:57.102 01:41:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.102 01:41:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.102 01:41:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.102 01:41:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.102 01:41:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.102 01:41:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.102 01:41:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.102 01:41:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.102 01:41:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2138463 00:05:57.102 01:41:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.102 01:41:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.102 Waiting for target to run... 
00:05:57.102 01:41:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2138463 /var/tmp/spdk_tgt.sock 00:05:57.102 01:41:38 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2138463 ']' 00:05:57.102 01:41:38 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.102 01:41:38 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.102 01:41:38 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.102 01:41:38 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.102 01:41:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.102 [2024-07-26 01:41:38.958569] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:05:57.102 [2024-07-26 01:41:38.958657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138463 ] 00:05:57.102 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.670 [2024-07-26 01:41:39.452817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.670 [2024-07-26 01:41:39.535075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.928 01:41:39 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.928 01:41:39 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:57.928 01:41:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:57.928 00:05:57.928 01:41:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:57.928 INFO: shutting down applications... 
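The shutdown sequence that follows (`json_config_test_shutdown_app`) sends SIGINT and then polls with `kill -0` up to 30 times at 0.5 s intervals. A minimal sketch of that wait loop, with a backgrounded `sleep` standing in for the target process:

```shell
# Stand-in for the target app: any long-running child process.
sleep 300 &
app_pid=$!

# Graceful shutdown request, then poll for exit (30 tries, 0.5 s apart),
# mirroring the loop in json_config/common.sh.
kill -SIGINT "$app_pid" 2>/dev/null
i=0
while [ "$i" -lt 30 ] && kill -0 "$app_pid" 2>/dev/null; do
    sleep 0.5
    i=$((i+1))
done

if kill -0 "$app_pid" 2>/dev/null; then
    echo 'ERROR: app still running after 15 s'
else
    echo 'SPDK target shutdown done'
fi
```

`kill -0` sends no signal; it only checks whether the PID still exists, which is why the loop uses it as a liveness probe before declaring the shutdown done.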
00:05:57.928 01:41:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:57.928 01:41:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:57.928 01:41:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:57.928 01:41:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2138463 ]] 00:05:57.928 01:41:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2138463 00:05:57.928 01:41:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:57.928 01:41:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.928 01:41:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2138463 00:05:57.928 01:41:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.494 01:41:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.494 01:41:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.494 01:41:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2138463 00:05:58.494 01:41:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:58.494 01:41:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:58.494 01:41:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:58.494 01:41:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:58.494 SPDK target shutdown done 00:05:58.494 01:41:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:58.494 Success 00:05:58.494 00:05:58.494 real 0m1.551s 00:05:58.494 user 0m1.340s 00:05:58.494 sys 0m0.607s 00:05:58.494 01:41:40 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.494 01:41:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:58.494 
************************************ 00:05:58.494 END TEST json_config_extra_key 00:05:58.494 ************************************ 00:05:58.494 01:41:40 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.494 01:41:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.494 01:41:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.494 01:41:40 -- common/autotest_common.sh@10 -- # set +x 00:05:58.494 ************************************ 00:05:58.494 START TEST alias_rpc 00:05:58.494 ************************************ 00:05:58.494 01:41:40 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.494 * Looking for test storage... 00:05:58.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:58.494 01:41:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:58.494 01:41:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2138772 00:05:58.494 01:41:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.494 01:41:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2138772 00:05:58.494 01:41:40 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2138772 ']' 00:05:58.494 01:41:40 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.494 01:41:40 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.494 01:41:40 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:58.494 01:41:40 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.494 01:41:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.752 [2024-07-26 01:41:40.551066] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:05:58.752 [2024-07-26 01:41:40.551146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138772 ] 00:05:58.752 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.752 [2024-07-26 01:41:40.608147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.752 [2024-07-26 01:41:40.691115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.011 01:41:40 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.011 01:41:40 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:59.011 01:41:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:59.269 01:41:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2138772 00:05:59.269 01:41:41 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2138772 ']' 00:05:59.269 01:41:41 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2138772 00:05:59.269 01:41:41 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:59.269 01:41:41 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.269 01:41:41 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2138772 00:05:59.269 01:41:41 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.269 01:41:41 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.269 01:41:41 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2138772' 
00:05:59.269 killing process with pid 2138772 00:05:59.269 01:41:41 alias_rpc -- common/autotest_common.sh@969 -- # kill 2138772 00:05:59.269 01:41:41 alias_rpc -- common/autotest_common.sh@974 -- # wait 2138772 00:05:59.835 00:05:59.835 real 0m1.198s 00:05:59.835 user 0m1.267s 00:05:59.835 sys 0m0.424s 00:05:59.835 01:41:41 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.835 01:41:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.835 ************************************ 00:05:59.835 END TEST alias_rpc 00:05:59.835 ************************************ 00:05:59.835 01:41:41 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:59.835 01:41:41 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:59.835 01:41:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.835 01:41:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.835 01:41:41 -- common/autotest_common.sh@10 -- # set +x 00:05:59.835 ************************************ 00:05:59.835 START TEST spdkcli_tcp 00:05:59.835 ************************************ 00:05:59.835 01:41:41 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:59.835 * Looking for test storage... 
00:05:59.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:59.835 01:41:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:59.835 01:41:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:59.835 01:41:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:59.835 01:41:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:59.835 01:41:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:59.835 01:41:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:59.835 01:41:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:59.835 01:41:41 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:59.835 01:41:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.835 01:41:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2138959 00:05:59.835 01:41:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:59.835 01:41:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2138959 00:05:59.835 01:41:41 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2138959 ']' 00:05:59.835 01:41:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.835 01:41:41 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.835 01:41:41 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:59.835 01:41:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.835 01:41:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.835 [2024-07-26 01:41:41.804283] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:05:59.835 [2024-07-26 01:41:41.804365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138959 ] 00:05:59.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.093 [2024-07-26 01:41:41.862231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.093 [2024-07-26 01:41:41.946497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.093 [2024-07-26 01:41:41.946501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.351 01:41:42 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.351 01:41:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:00.351 01:41:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2138963 00:06:00.351 01:41:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:00.351 01:41:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:00.610 [ 00:06:00.610 "bdev_malloc_delete", 00:06:00.610 "bdev_malloc_create", 00:06:00.610 "bdev_null_resize", 00:06:00.610 "bdev_null_delete", 00:06:00.610 "bdev_null_create", 00:06:00.610 "bdev_nvme_cuse_unregister", 00:06:00.610 "bdev_nvme_cuse_register", 00:06:00.610 "bdev_opal_new_user", 00:06:00.610 "bdev_opal_set_lock_state", 00:06:00.610 "bdev_opal_delete", 00:06:00.610 "bdev_opal_get_info", 00:06:00.610 "bdev_opal_create", 00:06:00.610 "bdev_nvme_opal_revert", 00:06:00.610 
"bdev_nvme_opal_init", 00:06:00.610 "bdev_nvme_send_cmd", 00:06:00.610 "bdev_nvme_get_path_iostat", 00:06:00.610 "bdev_nvme_get_mdns_discovery_info", 00:06:00.610 "bdev_nvme_stop_mdns_discovery", 00:06:00.610 "bdev_nvme_start_mdns_discovery", 00:06:00.610 "bdev_nvme_set_multipath_policy", 00:06:00.610 "bdev_nvme_set_preferred_path", 00:06:00.610 "bdev_nvme_get_io_paths", 00:06:00.610 "bdev_nvme_remove_error_injection", 00:06:00.610 "bdev_nvme_add_error_injection", 00:06:00.610 "bdev_nvme_get_discovery_info", 00:06:00.610 "bdev_nvme_stop_discovery", 00:06:00.610 "bdev_nvme_start_discovery", 00:06:00.610 "bdev_nvme_get_controller_health_info", 00:06:00.610 "bdev_nvme_disable_controller", 00:06:00.610 "bdev_nvme_enable_controller", 00:06:00.610 "bdev_nvme_reset_controller", 00:06:00.610 "bdev_nvme_get_transport_statistics", 00:06:00.610 "bdev_nvme_apply_firmware", 00:06:00.610 "bdev_nvme_detach_controller", 00:06:00.610 "bdev_nvme_get_controllers", 00:06:00.610 "bdev_nvme_attach_controller", 00:06:00.610 "bdev_nvme_set_hotplug", 00:06:00.610 "bdev_nvme_set_options", 00:06:00.610 "bdev_passthru_delete", 00:06:00.610 "bdev_passthru_create", 00:06:00.610 "bdev_lvol_set_parent_bdev", 00:06:00.610 "bdev_lvol_set_parent", 00:06:00.610 "bdev_lvol_check_shallow_copy", 00:06:00.610 "bdev_lvol_start_shallow_copy", 00:06:00.610 "bdev_lvol_grow_lvstore", 00:06:00.610 "bdev_lvol_get_lvols", 00:06:00.610 "bdev_lvol_get_lvstores", 00:06:00.610 "bdev_lvol_delete", 00:06:00.610 "bdev_lvol_set_read_only", 00:06:00.610 "bdev_lvol_resize", 00:06:00.610 "bdev_lvol_decouple_parent", 00:06:00.610 "bdev_lvol_inflate", 00:06:00.610 "bdev_lvol_rename", 00:06:00.610 "bdev_lvol_clone_bdev", 00:06:00.610 "bdev_lvol_clone", 00:06:00.610 "bdev_lvol_snapshot", 00:06:00.610 "bdev_lvol_create", 00:06:00.610 "bdev_lvol_delete_lvstore", 00:06:00.610 "bdev_lvol_rename_lvstore", 00:06:00.610 "bdev_lvol_create_lvstore", 00:06:00.610 "bdev_raid_set_options", 00:06:00.610 "bdev_raid_remove_base_bdev", 
00:06:00.610 "bdev_raid_add_base_bdev", 00:06:00.610 "bdev_raid_delete", 00:06:00.610 "bdev_raid_create", 00:06:00.610 "bdev_raid_get_bdevs", 00:06:00.610 "bdev_error_inject_error", 00:06:00.610 "bdev_error_delete", 00:06:00.610 "bdev_error_create", 00:06:00.610 "bdev_split_delete", 00:06:00.610 "bdev_split_create", 00:06:00.610 "bdev_delay_delete", 00:06:00.610 "bdev_delay_create", 00:06:00.610 "bdev_delay_update_latency", 00:06:00.610 "bdev_zone_block_delete", 00:06:00.610 "bdev_zone_block_create", 00:06:00.610 "blobfs_create", 00:06:00.610 "blobfs_detect", 00:06:00.610 "blobfs_set_cache_size", 00:06:00.610 "bdev_aio_delete", 00:06:00.610 "bdev_aio_rescan", 00:06:00.610 "bdev_aio_create", 00:06:00.610 "bdev_ftl_set_property", 00:06:00.610 "bdev_ftl_get_properties", 00:06:00.610 "bdev_ftl_get_stats", 00:06:00.610 "bdev_ftl_unmap", 00:06:00.610 "bdev_ftl_unload", 00:06:00.610 "bdev_ftl_delete", 00:06:00.610 "bdev_ftl_load", 00:06:00.610 "bdev_ftl_create", 00:06:00.610 "bdev_virtio_attach_controller", 00:06:00.610 "bdev_virtio_scsi_get_devices", 00:06:00.611 "bdev_virtio_detach_controller", 00:06:00.611 "bdev_virtio_blk_set_hotplug", 00:06:00.611 "bdev_iscsi_delete", 00:06:00.611 "bdev_iscsi_create", 00:06:00.611 "bdev_iscsi_set_options", 00:06:00.611 "accel_error_inject_error", 00:06:00.611 "ioat_scan_accel_module", 00:06:00.611 "dsa_scan_accel_module", 00:06:00.611 "iaa_scan_accel_module", 00:06:00.611 "vfu_virtio_create_scsi_endpoint", 00:06:00.611 "vfu_virtio_scsi_remove_target", 00:06:00.611 "vfu_virtio_scsi_add_target", 00:06:00.611 "vfu_virtio_create_blk_endpoint", 00:06:00.611 "vfu_virtio_delete_endpoint", 00:06:00.611 "keyring_file_remove_key", 00:06:00.611 "keyring_file_add_key", 00:06:00.611 "keyring_linux_set_options", 00:06:00.611 "iscsi_get_histogram", 00:06:00.611 "iscsi_enable_histogram", 00:06:00.611 "iscsi_set_options", 00:06:00.611 "iscsi_get_auth_groups", 00:06:00.611 "iscsi_auth_group_remove_secret", 00:06:00.611 "iscsi_auth_group_add_secret", 
00:06:00.611 "iscsi_delete_auth_group", 00:06:00.611 "iscsi_create_auth_group", 00:06:00.611 "iscsi_set_discovery_auth", 00:06:00.611 "iscsi_get_options", 00:06:00.611 "iscsi_target_node_request_logout", 00:06:00.611 "iscsi_target_node_set_redirect", 00:06:00.611 "iscsi_target_node_set_auth", 00:06:00.611 "iscsi_target_node_add_lun", 00:06:00.611 "iscsi_get_stats", 00:06:00.611 "iscsi_get_connections", 00:06:00.611 "iscsi_portal_group_set_auth", 00:06:00.611 "iscsi_start_portal_group", 00:06:00.611 "iscsi_delete_portal_group", 00:06:00.611 "iscsi_create_portal_group", 00:06:00.611 "iscsi_get_portal_groups", 00:06:00.611 "iscsi_delete_target_node", 00:06:00.611 "iscsi_target_node_remove_pg_ig_maps", 00:06:00.611 "iscsi_target_node_add_pg_ig_maps", 00:06:00.611 "iscsi_create_target_node", 00:06:00.611 "iscsi_get_target_nodes", 00:06:00.611 "iscsi_delete_initiator_group", 00:06:00.611 "iscsi_initiator_group_remove_initiators", 00:06:00.611 "iscsi_initiator_group_add_initiators", 00:06:00.611 "iscsi_create_initiator_group", 00:06:00.611 "iscsi_get_initiator_groups", 00:06:00.611 "nvmf_set_crdt", 00:06:00.611 "nvmf_set_config", 00:06:00.611 "nvmf_set_max_subsystems", 00:06:00.611 "nvmf_stop_mdns_prr", 00:06:00.611 "nvmf_publish_mdns_prr", 00:06:00.611 "nvmf_subsystem_get_listeners", 00:06:00.611 "nvmf_subsystem_get_qpairs", 00:06:00.611 "nvmf_subsystem_get_controllers", 00:06:00.611 "nvmf_get_stats", 00:06:00.611 "nvmf_get_transports", 00:06:00.611 "nvmf_create_transport", 00:06:00.611 "nvmf_get_targets", 00:06:00.611 "nvmf_delete_target", 00:06:00.611 "nvmf_create_target", 00:06:00.611 "nvmf_subsystem_allow_any_host", 00:06:00.611 "nvmf_subsystem_remove_host", 00:06:00.611 "nvmf_subsystem_add_host", 00:06:00.611 "nvmf_ns_remove_host", 00:06:00.611 "nvmf_ns_add_host", 00:06:00.611 "nvmf_subsystem_remove_ns", 00:06:00.611 "nvmf_subsystem_add_ns", 00:06:00.611 "nvmf_subsystem_listener_set_ana_state", 00:06:00.611 "nvmf_discovery_get_referrals", 00:06:00.611 
"nvmf_discovery_remove_referral", 00:06:00.611 "nvmf_discovery_add_referral", 00:06:00.611 "nvmf_subsystem_remove_listener", 00:06:00.611 "nvmf_subsystem_add_listener", 00:06:00.611 "nvmf_delete_subsystem", 00:06:00.611 "nvmf_create_subsystem", 00:06:00.611 "nvmf_get_subsystems", 00:06:00.611 "env_dpdk_get_mem_stats", 00:06:00.611 "nbd_get_disks", 00:06:00.611 "nbd_stop_disk", 00:06:00.611 "nbd_start_disk", 00:06:00.611 "ublk_recover_disk", 00:06:00.611 "ublk_get_disks", 00:06:00.611 "ublk_stop_disk", 00:06:00.611 "ublk_start_disk", 00:06:00.611 "ublk_destroy_target", 00:06:00.611 "ublk_create_target", 00:06:00.611 "virtio_blk_create_transport", 00:06:00.611 "virtio_blk_get_transports", 00:06:00.611 "vhost_controller_set_coalescing", 00:06:00.611 "vhost_get_controllers", 00:06:00.611 "vhost_delete_controller", 00:06:00.611 "vhost_create_blk_controller", 00:06:00.611 "vhost_scsi_controller_remove_target", 00:06:00.611 "vhost_scsi_controller_add_target", 00:06:00.611 "vhost_start_scsi_controller", 00:06:00.611 "vhost_create_scsi_controller", 00:06:00.611 "thread_set_cpumask", 00:06:00.611 "framework_get_governor", 00:06:00.611 "framework_get_scheduler", 00:06:00.611 "framework_set_scheduler", 00:06:00.611 "framework_get_reactors", 00:06:00.611 "thread_get_io_channels", 00:06:00.611 "thread_get_pollers", 00:06:00.611 "thread_get_stats", 00:06:00.611 "framework_monitor_context_switch", 00:06:00.611 "spdk_kill_instance", 00:06:00.611 "log_enable_timestamps", 00:06:00.611 "log_get_flags", 00:06:00.611 "log_clear_flag", 00:06:00.611 "log_set_flag", 00:06:00.611 "log_get_level", 00:06:00.611 "log_set_level", 00:06:00.611 "log_get_print_level", 00:06:00.611 "log_set_print_level", 00:06:00.611 "framework_enable_cpumask_locks", 00:06:00.611 "framework_disable_cpumask_locks", 00:06:00.611 "framework_wait_init", 00:06:00.611 "framework_start_init", 00:06:00.611 "scsi_get_devices", 00:06:00.611 "bdev_get_histogram", 00:06:00.611 "bdev_enable_histogram", 00:06:00.611 
"bdev_set_qos_limit", 00:06:00.611 "bdev_set_qd_sampling_period", 00:06:00.611 "bdev_get_bdevs", 00:06:00.611 "bdev_reset_iostat", 00:06:00.611 "bdev_get_iostat", 00:06:00.611 "bdev_examine", 00:06:00.611 "bdev_wait_for_examine", 00:06:00.611 "bdev_set_options", 00:06:00.611 "notify_get_notifications", 00:06:00.611 "notify_get_types", 00:06:00.611 "accel_get_stats", 00:06:00.611 "accel_set_options", 00:06:00.611 "accel_set_driver", 00:06:00.611 "accel_crypto_key_destroy", 00:06:00.611 "accel_crypto_keys_get", 00:06:00.611 "accel_crypto_key_create", 00:06:00.611 "accel_assign_opc", 00:06:00.611 "accel_get_module_info", 00:06:00.611 "accel_get_opc_assignments", 00:06:00.611 "vmd_rescan", 00:06:00.611 "vmd_remove_device", 00:06:00.611 "vmd_enable", 00:06:00.611 "sock_get_default_impl", 00:06:00.611 "sock_set_default_impl", 00:06:00.611 "sock_impl_set_options", 00:06:00.611 "sock_impl_get_options", 00:06:00.611 "iobuf_get_stats", 00:06:00.611 "iobuf_set_options", 00:06:00.611 "keyring_get_keys", 00:06:00.611 "framework_get_pci_devices", 00:06:00.611 "framework_get_config", 00:06:00.611 "framework_get_subsystems", 00:06:00.611 "vfu_tgt_set_base_path", 00:06:00.611 "trace_get_info", 00:06:00.611 "trace_get_tpoint_group_mask", 00:06:00.611 "trace_disable_tpoint_group", 00:06:00.611 "trace_enable_tpoint_group", 00:06:00.611 "trace_clear_tpoint_mask", 00:06:00.611 "trace_set_tpoint_mask", 00:06:00.611 "spdk_get_version", 00:06:00.611 "rpc_get_methods" 00:06:00.611 ] 00:06:00.611 01:41:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.611 01:41:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:00.611 01:41:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2138959 00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2138959 ']' 
00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2138959 00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2138959 00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2138959' 00:06:00.611 killing process with pid 2138959 00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2138959 00:06:00.611 01:41:42 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2138959 00:06:01.179 00:06:01.179 real 0m1.210s 00:06:01.179 user 0m2.163s 00:06:01.179 sys 0m0.449s 00:06:01.179 01:41:42 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.179 01:41:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.179 ************************************ 00:06:01.179 END TEST spdkcli_tcp 00:06:01.179 ************************************ 00:06:01.179 01:41:42 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.179 01:41:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.179 01:41:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.179 01:41:42 -- common/autotest_common.sh@10 -- # set +x 00:06:01.179 ************************************ 00:06:01.179 START TEST dpdk_mem_utility 00:06:01.179 ************************************ 00:06:01.179 01:41:42 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.179 
* Looking for test storage... 00:06:01.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:01.179 01:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.179 01:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2139159 00:06:01.179 01:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.179 01:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2139159 00:06:01.179 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2139159 ']' 00:06:01.179 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.179 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.179 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.179 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.179 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.179 [2024-07-26 01:41:43.052266] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:06:01.179 [2024-07-26 01:41:43.052366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139159 ] 00:06:01.179 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.179 [2024-07-26 01:41:43.109186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.438 [2024-07-26 01:41:43.193252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.438 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.438 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:01.438 01:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:01.438 01:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:01.438 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.438 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.695 { 00:06:01.695 "filename": "/tmp/spdk_mem_dump.txt" 00:06:01.695 } 00:06:01.695 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.695 01:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.695 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:01.695 1 heaps totaling size 814.000000 MiB 00:06:01.695 size: 814.000000 MiB heap id: 0 00:06:01.695 end heaps---------- 00:06:01.695 8 mempools totaling size 598.116089 MiB 00:06:01.695 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:01.695 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:01.695 size: 84.521057 MiB name: bdev_io_2139159 00:06:01.695 size: 51.011292 MiB name: evtpool_2139159 
00:06:01.695 size: 50.003479 MiB name: msgpool_2139159 00:06:01.695 size: 21.763794 MiB name: PDU_Pool 00:06:01.695 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:01.695 size: 0.026123 MiB name: Session_Pool 00:06:01.695 end mempools------- 00:06:01.695 6 memzones totaling size 4.142822 MiB 00:06:01.695 size: 1.000366 MiB name: RG_ring_0_2139159 00:06:01.695 size: 1.000366 MiB name: RG_ring_1_2139159 00:06:01.695 size: 1.000366 MiB name: RG_ring_4_2139159 00:06:01.695 size: 1.000366 MiB name: RG_ring_5_2139159 00:06:01.695 size: 0.125366 MiB name: RG_ring_2_2139159 00:06:01.695 size: 0.015991 MiB name: RG_ring_3_2139159 00:06:01.695 end memzones------- 00:06:01.695 01:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:01.695 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:01.695 list of free elements. size: 12.519348 MiB 00:06:01.695 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:01.695 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:01.695 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:01.695 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:01.695 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:01.695 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:01.695 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:01.695 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:01.695 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:01.695 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:01.695 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:01.695 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:01.695 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:01.695 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:06:01.695 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:01.695 list of standard malloc elements. size: 199.218079 MiB 00:06:01.695 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:01.695 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:01.695 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:01.695 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:01.695 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:01.695 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:01.695 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:01.695 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:01.695 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:01.695 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:01.695 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:01.695 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:01.695 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:01.695 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:01.695 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:01.695 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:01.695 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:01.695 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:01.695 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:01.695 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:01.695 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:01.695 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:01.695 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:01.695 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:01.695 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:01.695 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:06:01.695 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:01.695 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:01.696 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:01.696 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:01.696 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:01.696 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:01.696 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:01.696 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:01.696 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:01.696 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:01.696 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:01.696 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:01.696 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:01.696 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:01.696 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:01.696 list of memzone associated elements. 
size: 602.262573 MiB 00:06:01.696 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:01.696 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:01.696 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:01.696 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:01.696 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:01.696 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2139159_0 00:06:01.696 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:01.696 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2139159_0 00:06:01.696 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:01.696 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2139159_0 00:06:01.696 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:01.696 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:01.696 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:01.696 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:01.696 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:01.696 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2139159 00:06:01.696 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:01.696 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2139159 00:06:01.696 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:01.696 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2139159 00:06:01.696 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:01.696 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:01.696 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:01.696 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:01.696 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:01.696 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:01.696 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:01.696 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:01.696 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:01.696 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2139159 00:06:01.696 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:01.696 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2139159 00:06:01.696 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:01.696 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2139159 00:06:01.696 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:01.696 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2139159 00:06:01.696 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:01.696 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2139159 00:06:01.696 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:01.696 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:01.696 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:01.696 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:01.696 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:01.696 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:01.696 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:01.696 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2139159 00:06:01.696 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:01.696 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:01.696 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:01.696 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:01.696 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:06:01.696 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2139159 00:06:01.696 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:01.696 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:01.696 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:01.696 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2139159 00:06:01.696 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:01.696 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2139159 00:06:01.696 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:01.696 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:01.696 01:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:01.696 01:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2139159 00:06:01.696 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2139159 ']' 00:06:01.696 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2139159 00:06:01.696 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:01.696 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.696 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2139159 00:06:01.696 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.696 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.696 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2139159' 00:06:01.696 killing process with pid 2139159 00:06:01.696 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2139159 00:06:01.696 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2139159 00:06:02.261 00:06:02.261 real 0m1.029s 
00:06:02.261 user 0m0.996s 00:06:02.261 sys 0m0.396s 00:06:02.261 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.261 01:41:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.261 ************************************ 00:06:02.261 END TEST dpdk_mem_utility 00:06:02.261 ************************************ 00:06:02.261 01:41:44 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:02.261 01:41:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.261 01:41:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.261 01:41:44 -- common/autotest_common.sh@10 -- # set +x 00:06:02.261 ************************************ 00:06:02.261 START TEST event 00:06:02.261 ************************************ 00:06:02.261 01:41:44 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:02.261 * Looking for test storage... 
00:06:02.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:02.261 01:41:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:02.261 01:41:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:02.261 01:41:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.261 01:41:44 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:02.261 01:41:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.261 01:41:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.261 ************************************ 00:06:02.261 START TEST event_perf 00:06:02.261 ************************************ 00:06:02.261 01:41:44 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.261 Running I/O for 1 seconds...[2024-07-26 01:41:44.123197] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:06:02.261 [2024-07-26 01:41:44.123261] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139347 ] 00:06:02.261 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.261 [2024-07-26 01:41:44.186259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.519 [2024-07-26 01:41:44.279109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.519 [2024-07-26 01:41:44.279159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.519 [2024-07-26 01:41:44.279274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.519 [2024-07-26 01:41:44.279277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.448 Running I/O for 1 seconds... 00:06:03.448 lcore 0: 239458 00:06:03.448 lcore 1: 239456 00:06:03.448 lcore 2: 239455 00:06:03.448 lcore 3: 239456 00:06:03.448 done. 
00:06:03.448 00:06:03.448 real 0m1.254s 00:06:03.448 user 0m4.173s 00:06:03.448 sys 0m0.076s 00:06:03.448 01:41:45 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.448 01:41:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.448 ************************************ 00:06:03.448 END TEST event_perf 00:06:03.448 ************************************ 00:06:03.448 01:41:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:03.448 01:41:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:03.449 01:41:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.449 01:41:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.449 ************************************ 00:06:03.449 START TEST event_reactor 00:06:03.449 ************************************ 00:06:03.449 01:41:45 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:03.449 [2024-07-26 01:41:45.421598] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:06:03.449 [2024-07-26 01:41:45.421665] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139510 ] 00:06:03.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.706 [2024-07-26 01:41:45.483613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.706 [2024-07-26 01:41:45.575898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.110 test_start 00:06:05.110 oneshot 00:06:05.110 tick 100 00:06:05.110 tick 100 00:06:05.110 tick 250 00:06:05.110 tick 100 00:06:05.110 tick 100 00:06:05.110 tick 100 00:06:05.110 tick 250 00:06:05.110 tick 500 00:06:05.110 tick 100 00:06:05.110 tick 100 00:06:05.110 tick 250 00:06:05.110 tick 100 00:06:05.110 tick 100 00:06:05.110 test_end 00:06:05.110 00:06:05.110 real 0m1.250s 00:06:05.110 user 0m1.154s 00:06:05.110 sys 0m0.092s 00:06:05.110 01:41:46 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.110 01:41:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:05.110 ************************************ 00:06:05.110 END TEST event_reactor 00:06:05.110 ************************************ 00:06:05.110 01:41:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.110 01:41:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:05.110 01:41:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.110 01:41:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.110 ************************************ 00:06:05.110 START TEST event_reactor_perf 00:06:05.110 ************************************ 00:06:05.110 01:41:46 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.110 [2024-07-26 01:41:46.724319] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:05.110 [2024-07-26 01:41:46.724408] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139662 ] 00:06:05.110 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.110 [2024-07-26 01:41:46.787845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.110 [2024-07-26 01:41:46.877969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.044 test_start 00:06:06.044 test_end 00:06:06.044 Performance: 350386 events per second 00:06:06.044 00:06:06.044 real 0m1.249s 00:06:06.044 user 0m1.163s 00:06:06.044 sys 0m0.081s 00:06:06.045 01:41:47 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.045 01:41:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 ************************************ 00:06:06.045 END TEST event_reactor_perf 00:06:06.045 ************************************ 00:06:06.045 01:41:47 event -- event/event.sh@49 -- # uname -s 00:06:06.045 01:41:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:06.045 01:41:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.045 01:41:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.045 01:41:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.045 01:41:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.045 ************************************ 00:06:06.045 START TEST event_scheduler 00:06:06.045 ************************************ 
00:06:06.045 01:41:48 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.045 * Looking for test storage... 00:06:06.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:06.302 01:41:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:06.302 01:41:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2139848 00:06:06.302 01:41:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:06.302 01:41:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.302 01:41:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2139848 00:06:06.302 01:41:48 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2139848 ']' 00:06:06.302 01:41:48 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.302 01:41:48 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.302 01:41:48 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.302 01:41:48 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.302 01:41:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.302 [2024-07-26 01:41:48.098539] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:06:06.302 [2024-07-26 01:41:48.098616] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139848 ] 00:06:06.302 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.302 [2024-07-26 01:41:48.162772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.302 [2024-07-26 01:41:48.254176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.302 [2024-07-26 01:41:48.254241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.302 [2024-07-26 01:41:48.254307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.302 [2024-07-26 01:41:48.254310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.561 01:41:48 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.561 01:41:48 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:06.561 01:41:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:06.561 01:41:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 [2024-07-26 01:41:48.319184] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:06.561 [2024-07-26 01:41:48.319210] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:06.561 [2024-07-26 01:41:48.319227] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:06.561 [2024-07-26 01:41:48.319238] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:06.561 [2024-07-26 01:41:48.319249] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:06:06.561 01:41:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:06.561 01:41:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 [2024-07-26 01:41:48.412997] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:06.561 01:41:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:06.561 01:41:48 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.561 01:41:48 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 ************************************ 00:06:06.561 START TEST scheduler_create_thread 00:06:06.561 ************************************ 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 2 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 3 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 4 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 5 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 6 
00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 7 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 8 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 9 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:06.561 01:41:48 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 10 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.561 01:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.125 01:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.125 00:06:07.125 real 0m0.591s 00:06:07.125 user 0m0.009s 00:06:07.125 sys 0m0.004s 00:06:07.125 01:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.125 01:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.125 ************************************ 00:06:07.125 END TEST scheduler_create_thread 00:06:07.125 ************************************ 00:06:07.125 01:41:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:07.125 01:41:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2139848 00:06:07.125 01:41:49 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2139848 ']' 00:06:07.125 01:41:49 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2139848 00:06:07.126 01:41:49 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:07.126 01:41:49 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.126 01:41:49 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2139848 00:06:07.126 01:41:49 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:07.126 01:41:49 
event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:07.126 01:41:49 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2139848' 00:06:07.126 killing process with pid 2139848 00:06:07.126 01:41:49 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2139848 00:06:07.126 01:41:49 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2139848 00:06:07.690 [2024-07-26 01:41:49.513179] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:07.947 00:06:07.947 real 0m1.728s 00:06:07.947 user 0m2.229s 00:06:07.947 sys 0m0.350s 00:06:07.947 01:41:49 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.947 01:41:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.947 ************************************ 00:06:07.947 END TEST event_scheduler 00:06:07.947 ************************************ 00:06:07.947 01:41:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:07.947 01:41:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:07.947 01:41:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.947 01:41:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.947 01:41:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.947 ************************************ 00:06:07.947 START TEST app_repeat 00:06:07.947 ************************************ 00:06:07.947 01:41:49 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:07.947 01:41:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.947 01:41:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.947 01:41:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:07.947 01:41:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 
00:06:07.947 01:41:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:07.947 01:41:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:07.947 01:41:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:07.947 01:41:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2140158 00:06:07.947 01:41:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:07.948 01:41:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.948 01:41:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2140158' 00:06:07.948 Process app_repeat pid: 2140158 00:06:07.948 01:41:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.948 01:41:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:07.948 spdk_app_start Round 0 00:06:07.948 01:41:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2140158 /var/tmp/spdk-nbd.sock 00:06:07.948 01:41:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2140158 ']' 00:06:07.948 01:41:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.948 01:41:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.948 01:41:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.948 01:41:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.948 01:41:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.948 [2024-07-26 01:41:49.818241] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:06:07.948 [2024-07-26 01:41:49.818306] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140158 ] 00:06:07.948 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.948 [2024-07-26 01:41:49.881781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.205 [2024-07-26 01:41:49.971767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.205 [2024-07-26 01:41:49.971772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.205 01:41:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.205 01:41:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:08.205 01:41:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.463 Malloc0 00:06:08.463 01:41:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.721 Malloc1 00:06:08.721 01:41:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks 
/var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.721 01:41:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.978 /dev/nbd0 00:06:08.978 01:41:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.978 01:41:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.978 1+0 records in 00:06:08.978 1+0 records out 00:06:08.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216498 s, 18.9 MB/s 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:08.978 01:41:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:08.978 01:41:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.978 01:41:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.979 01:41:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.236 /dev/nbd1 00:06:09.236 01:41:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.236 01:41:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:09.236 01:41:51 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.236 1+0 records in 00:06:09.236 1+0 records out 00:06:09.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222685 s, 18.4 MB/s 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:09.236 01:41:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:09.236 01:41:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.236 01:41:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.236 01:41:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.236 01:41:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.236 01:41:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.494 01:41:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.494 { 00:06:09.494 "nbd_device": "/dev/nbd0", 00:06:09.494 "bdev_name": "Malloc0" 00:06:09.494 }, 00:06:09.495 { 00:06:09.495 "nbd_device": "/dev/nbd1", 00:06:09.495 "bdev_name": "Malloc1" 00:06:09.495 } 00:06:09.495 ]' 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.495 { 
00:06:09.495 "nbd_device": "/dev/nbd0", 00:06:09.495 "bdev_name": "Malloc0" 00:06:09.495 }, 00:06:09.495 { 00:06:09.495 "nbd_device": "/dev/nbd1", 00:06:09.495 "bdev_name": "Malloc1" 00:06:09.495 } 00:06:09.495 ]' 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.495 /dev/nbd1' 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.495 /dev/nbd1' 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.495 256+0 records in 00:06:09.495 256+0 records out 00:06:09.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00410611 s, 255 MB/s 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.495 256+0 records in 00:06:09.495 256+0 records out 00:06:09.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283796 s, 36.9 MB/s 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.495 01:41:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.753 256+0 records in 00:06:09.753 256+0 records out 00:06:09.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274234 s, 38.2 MB/s 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.753 01:41:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.010 01:41:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.010 01:41:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.010 01:41:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.010 01:41:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.010 01:41:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.010 01:41:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.010 01:41:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.010 01:41:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.011 01:41:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.011 01:41:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.268 01:41:52 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.268 01:41:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.268 01:41:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.268 01:41:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.268 01:41:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.268 01:41:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.268 01:41:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.268 01:41:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.268 01:41:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.268 01:41:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.268 01:41:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.526 01:41:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.526 01:41:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.526 01:41:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.526 01:41:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.526 01:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.526 01:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.526 01:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.526 01:41:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.526 01:41:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.526 01:41:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.526 01:41:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.526 01:41:52 event.app_repeat -- 
bdev/nbd_common.sh@109 -- # return 0 00:06:10.526 01:41:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.784 01:41:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.042 [2024-07-26 01:41:52.854234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.042 [2024-07-26 01:41:52.944097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.042 [2024-07-26 01:41:52.944097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.042 [2024-07-26 01:41:53.005384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.042 [2024-07-26 01:41:53.005497] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.323 01:41:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.323 01:41:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:14.323 spdk_app_start Round 1 00:06:14.323 01:41:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2140158 /var/tmp/spdk-nbd.sock 00:06:14.323 01:41:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2140158 ']' 00:06:14.323 01:41:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.323 01:41:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.323 01:41:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
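The `waitfornbd_exit` calls traced above implement a bounded poll: after `nbd_stop_disk`, the helper greps `/proc/partitions` for the device name up to 20 times and returns once it disappears. A minimal stand-alone sketch of that pattern follows; the 0.1 s sleep interval is an assumption for illustration, and the real helper lives in `bdev/nbd_common.sh`:

```shell
# Sketch of the waitfornbd_exit polling pattern seen in the trace:
# wait up to 20 iterations for an nbd device name to vanish from
# /proc/partitions. The sleep interval is an illustrative assumption.
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            return 0  # device is gone
        fi
        sleep 0.1
    done
    return 1  # still present after 20 tries
}
```

The bounded retry keeps the test from hanging forever if the kernel is slow to tear the device down, while the `-w` word match avoids `nbd1` matching `nbd10`.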
00:06:14.323 01:41:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.323 01:41:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.323 01:41:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.323 01:41:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:14.323 01:41:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.323 Malloc0 00:06:14.323 01:41:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.580 Malloc1 00:06:14.581 01:41:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.581 01:41:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.839 /dev/nbd0 00:06:14.839 01:41:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.839 01:41:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.839 01:41:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:14.839 01:41:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:14.839 01:41:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:14.839 01:41:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:14.839 01:41:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:14.839 01:41:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:14.840 01:41:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:14.840 01:41:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:14.840 01:41:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.840 1+0 records in 00:06:14.840 1+0 records out 00:06:14.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000145736 s, 28.1 MB/s 00:06:14.840 01:41:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.840 01:41:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:14.840 01:41:56 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.840 01:41:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:14.840 01:41:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:14.840 01:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.840 01:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.840 01:41:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.097 /dev/nbd1 00:06:15.097 01:41:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.097 01:41:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.097 1+0 records in 00:06:15.097 1+0 records out 00:06:15.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239807 s, 17.1 MB/s 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.097 01:41:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.097 01:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.097 01:41:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.097 01:41:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.097 01:41:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.097 01:41:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.355 { 00:06:15.355 "nbd_device": "/dev/nbd0", 00:06:15.355 "bdev_name": "Malloc0" 00:06:15.355 }, 00:06:15.355 { 00:06:15.355 "nbd_device": "/dev/nbd1", 00:06:15.355 "bdev_name": "Malloc1" 00:06:15.355 } 00:06:15.355 ]' 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.355 { 00:06:15.355 "nbd_device": "/dev/nbd0", 00:06:15.355 "bdev_name": "Malloc0" 00:06:15.355 }, 00:06:15.355 { 00:06:15.355 "nbd_device": "/dev/nbd1", 00:06:15.355 "bdev_name": "Malloc1" 00:06:15.355 } 00:06:15.355 ]' 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.355 /dev/nbd1' 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.355 /dev/nbd1' 00:06:15.355 
01:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.355 256+0 records in 00:06:15.355 256+0 records out 00:06:15.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380197 s, 276 MB/s 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.355 256+0 records in 00:06:15.355 256+0 records out 00:06:15.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207994 s, 50.4 MB/s 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.355 256+0 records in 00:06:15.355 256+0 records out 00:06:15.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246297 s, 42.6 MB/s 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.355 01:41:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.612 01:41:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.612 01:41:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.612 01:41:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.612 01:41:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.612 01:41:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.612 01:41:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.612 01:41:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.612 01:41:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.612 01:41:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.612 01:41:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.869 01:41:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.869 01:41:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.869 01:41:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.869 01:41:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.869 01:41:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.869 01:41:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.126 01:41:57 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:16.126 01:41:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.126 01:41:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.126 01:41:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.126 01:41:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.126 01:41:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.126 01:41:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.126 01:41:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.384 01:41:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.384 01:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.384 01:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.384 01:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.384 01:41:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.384 01:41:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.384 01:41:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.384 01:41:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.384 01:41:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.384 01:41:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.643 01:41:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.901 [2024-07-26 01:41:58.676069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.901 [2024-07-26 01:41:58.766040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.901 [2024-07-26 01:41:58.766043] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.901 [2024-07-26 01:41:58.828504] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.901 [2024-07-26 01:41:58.828612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.176 01:42:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.176 01:42:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.176 spdk_app_start Round 2 00:06:20.176 01:42:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2140158 /var/tmp/spdk-nbd.sock 00:06:20.176 01:42:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2140158 ']' 00:06:20.176 01:42:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.176 01:42:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.176 01:42:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
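Each round's `nbd_dd_data_verify` sequence, visible in the trace above, writes a 1 MiB random pattern (256 blocks of 4096 bytes) through every nbd device and then `cmp`-checks it back. A self-contained sketch under assumptions: regular files stand in for `/dev/nbd*`, `oflag=direct` is dropped since it only applies to block devices, and the original's `cmp -b -n 1M` byte limit is omitted because the files are exactly 1 MiB anyway:

```shell
# Sketch of the write/verify flow: generate 256 x 4096-byte random
# blocks into a temp file, copy them to each target, then compare
# each target back against the temp file. Regular files replace the
# /dev/nbd* devices here for illustration.
nbd_dd_data_verify() {
    local tmp_file=$1 operation=$2; shift 2
    local targets=("$@") t
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for t in "${targets[@]}"; do
            dd if="$tmp_file" of="$t" bs=4096 count=256 2>/dev/null
        done
    else
        for t in "${targets[@]}"; do
            cmp "$tmp_file" "$t" || return 1
        done
    fi
}
```

Writing through the device and comparing against the source file exercises the whole RPC-to-block-layer path, which is why the trace shows one `dd` and one `cmp` per nbd device per round.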
00:06:20.176 01:42:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.176 01:42:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.176 01:42:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.176 01:42:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:20.176 01:42:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.176 Malloc0 00:06:20.176 01:42:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.434 Malloc1 00:06:20.434 01:42:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.434 01:42:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.690 /dev/nbd0 00:06:20.690 01:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.690 01:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.690 01:42:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:20.690 01:42:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:20.691 01:42:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.691 01:42:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.691 01:42:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:20.691 01:42:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:20.691 01:42:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.691 01:42:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.691 01:42:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.691 1+0 records in 00:06:20.691 1+0 records out 00:06:20.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015554 s, 26.3 MB/s 00:06:20.691 01:42:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.691 01:42:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:20.691 01:42:02 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.691 01:42:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:20.691 01:42:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:20.691 01:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.691 01:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.691 01:42:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.948 /dev/nbd1 00:06:20.948 01:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.948 01:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.948 1+0 records in 00:06:20.948 1+0 records out 00:06:20.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255046 s, 16.1 MB/s 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:20.948 01:42:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:20.948 01:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.948 01:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.948 01:42:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.948 01:42:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.948 01:42:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.205 01:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.205 { 00:06:21.205 "nbd_device": "/dev/nbd0", 00:06:21.205 "bdev_name": "Malloc0" 00:06:21.205 }, 00:06:21.205 { 00:06:21.205 "nbd_device": "/dev/nbd1", 00:06:21.205 "bdev_name": "Malloc1" 00:06:21.205 } 00:06:21.205 ]' 00:06:21.205 01:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.205 { 00:06:21.205 "nbd_device": "/dev/nbd0", 00:06:21.205 "bdev_name": "Malloc0" 00:06:21.205 }, 00:06:21.206 { 00:06:21.206 "nbd_device": "/dev/nbd1", 00:06:21.206 "bdev_name": "Malloc1" 00:06:21.206 } 00:06:21.206 ]' 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.206 /dev/nbd1' 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.206 /dev/nbd1' 00:06:21.206 
01:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.206 256+0 records in 00:06:21.206 256+0 records out 00:06:21.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500276 s, 210 MB/s 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.206 256+0 records in 00:06:21.206 256+0 records out 00:06:21.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208977 s, 50.2 MB/s 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.206 01:42:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.463 256+0 records in 00:06:21.463 256+0 records out 00:06:21.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251444 s, 41.7 MB/s 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.463 01:42:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.720 01:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.720 01:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.720 01:42:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.720 01:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.720 01:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.720 01:42:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.720 01:42:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.720 01:42:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.720 01:42:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.720 01:42:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.977 01:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.977 01:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.977 01:42:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.977 01:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.977 01:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.977 01:42:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.977 01:42:03 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:21.977 01:42:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.977 01:42:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.977 01:42:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.977 01:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.234 01:42:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.234 01:42:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.492 01:42:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.750 [2024-07-26 01:42:04.620273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.750 [2024-07-26 01:42:04.710095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.750 [2024-07-26 01:42:04.710100] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.041 [2024-07-26 01:42:04.771793] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.041 [2024-07-26 01:42:04.771861] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.567 01:42:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2140158 /var/tmp/spdk-nbd.sock 00:06:25.567 01:42:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2140158 ']' 00:06:25.567 01:42:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.567 01:42:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.567 01:42:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:25.567 01:42:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.567 01:42:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:25.824 01:42:07 event.app_repeat -- event/event.sh@39 -- # killprocess 2140158 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2140158 ']' 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2140158 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2140158 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2140158' 00:06:25.824 killing process with pid 2140158 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2140158 00:06:25.824 01:42:07 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2140158 00:06:26.082 spdk_app_start is called in Round 0. 00:06:26.082 Shutdown signal received, stop current app iteration 00:06:26.082 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 reinitialization... 00:06:26.082 spdk_app_start is called in Round 1. 00:06:26.082 Shutdown signal received, stop current app iteration 00:06:26.082 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 reinitialization... 00:06:26.082 spdk_app_start is called in Round 2. 
00:06:26.082 Shutdown signal received, stop current app iteration 00:06:26.082 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 reinitialization... 00:06:26.082 spdk_app_start is called in Round 3. 00:06:26.082 Shutdown signal received, stop current app iteration 00:06:26.082 01:42:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:26.082 01:42:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:26.082 00:06:26.082 real 0m18.080s 00:06:26.082 user 0m39.426s 00:06:26.082 sys 0m3.290s 00:06:26.082 01:42:07 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.082 01:42:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.082 ************************************ 00:06:26.082 END TEST app_repeat 00:06:26.082 ************************************ 00:06:26.082 01:42:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:26.082 01:42:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:26.082 01:42:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.082 01:42:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.082 01:42:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.082 ************************************ 00:06:26.082 START TEST cpu_locks 00:06:26.082 ************************************ 00:06:26.082 01:42:07 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:26.082 * Looking for test storage... 
00:06:26.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:26.082 01:42:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:26.082 01:42:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:26.082 01:42:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:26.082 01:42:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:26.082 01:42:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.082 01:42:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.082 01:42:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.082 ************************************ 00:06:26.082 START TEST default_locks 00:06:26.082 ************************************ 00:06:26.082 01:42:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:26.082 01:42:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2142941 00:06:26.082 01:42:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.082 01:42:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2142941 00:06:26.082 01:42:07 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2142941 ']' 00:06:26.082 01:42:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.082 01:42:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.082 01:42:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:26.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.082 01:42:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.082 01:42:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.082 [2024-07-26 01:42:08.041458] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:26.082 [2024-07-26 01:42:08.041541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142941 ] 00:06:26.082 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.340 [2024-07-26 01:42:08.100328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.340 [2024-07-26 01:42:08.183650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.598 01:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.598 01:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:26.598 01:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2142941 00:06:26.598 01:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2142941 00:06:26.598 01:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.856 lslocks: write error 00:06:26.856 01:42:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2142941 00:06:26.856 01:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2142941 ']' 00:06:26.856 01:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2142941 00:06:26.856 01:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:26.856 01:42:08 event.cpu_locks.default_locks 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.856 01:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2142941 00:06:26.856 01:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.856 01:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.856 01:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2142941' 00:06:26.856 killing process with pid 2142941 00:06:26.856 01:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2142941 00:06:26.856 01:42:08 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2142941 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2142941 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2142941 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2142941 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2142941 ']' 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2142941) - No such process 00:06:27.169 ERROR: process (pid: 2142941) is no longer running 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.169 00:06:27.169 real 0m1.173s 00:06:27.169 user 0m1.109s 00:06:27.169 sys 0m0.526s 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.169 01:42:09 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:27.169 ************************************ 00:06:27.169 END TEST default_locks 00:06:27.169 ************************************ 00:06:27.428 01:42:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:27.428 01:42:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.428 01:42:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.428 01:42:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.428 ************************************ 00:06:27.428 START TEST default_locks_via_rpc 00:06:27.428 ************************************ 00:06:27.428 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:27.428 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2143291 00:06:27.428 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.428 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2143291 00:06:27.428 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2143291 ']' 00:06:27.428 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.428 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.428 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.428 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.428 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.428 [2024-07-26 01:42:09.266427] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:27.428 [2024-07-26 01:42:09.266520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143291 ] 00:06:27.428 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.428 [2024-07-26 01:42:09.323682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.428 [2024-07-26 01:42:09.412455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2143291 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2143291 00:06:27.686 01:42:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.251 01:42:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2143291 00:06:28.251 01:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2143291 ']' 00:06:28.251 01:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2143291 00:06:28.251 01:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:28.251 01:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.251 01:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2143291 00:06:28.251 01:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.251 01:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.251 01:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2143291' 00:06:28.251 killing process with pid 2143291 00:06:28.251 01:42:10 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@969 -- # kill 2143291 00:06:28.251 01:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2143291 00:06:28.509 00:06:28.509 real 0m1.252s 00:06:28.509 user 0m1.213s 00:06:28.509 sys 0m0.526s 00:06:28.509 01:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.509 01:42:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.509 ************************************ 00:06:28.509 END TEST default_locks_via_rpc 00:06:28.509 ************************************ 00:06:28.509 01:42:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:28.509 01:42:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.509 01:42:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.509 01:42:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.509 ************************************ 00:06:28.509 START TEST non_locking_app_on_locked_coremask 00:06:28.509 ************************************ 00:06:28.509 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:28.509 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2143499 00:06:28.509 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.509 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2143499 /var/tmp/spdk.sock 00:06:28.509 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2143499 ']' 00:06:28.509 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.509 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.509 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.509 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.509 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.767 [2024-07-26 01:42:10.570079] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:28.768 [2024-07-26 01:42:10.570158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143499 ] 00:06:28.768 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.768 [2024-07-26 01:42:10.634499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.768 [2024-07-26 01:42:10.720835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.025 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.025 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:29.025 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2143585 00:06:29.025 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
--disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:29.025 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2143585 /var/tmp/spdk2.sock 00:06:29.025 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2143585 ']' 00:06:29.025 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.025 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.025 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.025 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.025 01:42:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.025 [2024-07-26 01:42:11.023677] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:29.026 [2024-07-26 01:42:11.023755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143585 ] 00:06:29.283 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.283 [2024-07-26 01:42:11.120212] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.283 [2024-07-26 01:42:11.120257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.542 [2024-07-26 01:42:11.298752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.110 01:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.110 01:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:30.110 01:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2143499 00:06:30.110 01:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2143499 00:06:30.110 01:42:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.677 lslocks: write error 00:06:30.677 01:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2143499 00:06:30.677 01:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2143499 ']' 00:06:30.677 01:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2143499 00:06:30.677 01:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:30.677 01:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.677 01:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2143499 00:06:30.677 01:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.677 01:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.677 01:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2143499' 00:06:30.677 killing process with pid 2143499 00:06:30.677 01:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2143499 00:06:30.677 01:42:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2143499 00:06:31.615 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2143585 00:06:31.615 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2143585 ']' 00:06:31.615 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2143585 00:06:31.615 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:31.615 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.615 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2143585 00:06:31.615 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.615 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.615 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2143585' 00:06:31.615 killing process with pid 2143585 00:06:31.615 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2143585 00:06:31.615 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2143585 00:06:31.874 00:06:31.874 real 0m3.210s 00:06:31.874 user 0m3.361s 00:06:31.874 sys 0m1.072s 00:06:31.874 01:42:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.874 01:42:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.874 ************************************ 00:06:31.874 END TEST non_locking_app_on_locked_coremask 00:06:31.874 ************************************ 00:06:31.874 01:42:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:31.874 01:42:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.874 01:42:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.874 01:42:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.874 ************************************ 00:06:31.874 START TEST locking_app_on_unlocked_coremask 00:06:31.874 ************************************ 00:06:31.874 01:42:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:31.874 01:42:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2143897 00:06:31.874 01:42:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:31.874 01:42:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2143897 /var/tmp/spdk.sock 00:06:31.874 01:42:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2143897 ']' 00:06:31.874 01:42:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.874 01:42:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.874 01:42:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.874 01:42:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.874 01:42:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.874 [2024-07-26 01:42:13.827605] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:31.874 [2024-07-26 01:42:13.827685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143897 ] 00:06:31.874 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.133 [2024-07-26 01:42:13.894550] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.133 [2024-07-26 01:42:13.894586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.133 [2024-07-26 01:42:13.987835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.391 01:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.391 01:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:32.391 01:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2144020 00:06:32.391 01:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.391 01:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2144020 /var/tmp/spdk2.sock 00:06:32.391 01:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2144020 ']' 00:06:32.391 01:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.391 01:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.391 01:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.391 01:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.391 01:42:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.391 [2024-07-26 01:42:14.291743] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:06:32.391 [2024-07-26 01:42:14.291825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144020 ] 00:06:32.391 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.391 [2024-07-26 01:42:14.387675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.649 [2024-07-26 01:42:14.572501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.584 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.584 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:33.584 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2144020 00:06:33.584 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2144020 00:06:33.584 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.842 lslocks: write error 00:06:33.842 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2143897 00:06:33.842 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2143897 ']' 00:06:33.842 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2143897 00:06:33.842 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:33.842 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.842 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2143897 00:06:33.842 01:42:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.842 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.842 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2143897' 00:06:33.842 killing process with pid 2143897 00:06:33.842 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2143897 00:06:33.842 01:42:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2143897 00:06:34.780 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2144020 00:06:34.780 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2144020 ']' 00:06:34.780 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2144020 00:06:34.780 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:34.780 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.780 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2144020 00:06:34.780 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.780 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.780 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2144020' 00:06:34.780 killing process with pid 2144020 00:06:34.780 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 2144020 00:06:34.780 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2144020 00:06:35.039 00:06:35.039 real 0m3.180s 00:06:35.039 user 0m3.326s 00:06:35.039 sys 0m1.067s 00:06:35.039 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.039 01:42:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.039 ************************************ 00:06:35.039 END TEST locking_app_on_unlocked_coremask 00:06:35.039 ************************************ 00:06:35.039 01:42:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.039 01:42:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.039 01:42:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.039 01:42:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.039 ************************************ 00:06:35.039 START TEST locking_app_on_locked_coremask 00:06:35.039 ************************************ 00:06:35.039 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:35.039 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2144330 00:06:35.039 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.039 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2144330 /var/tmp/spdk.sock 00:06:35.039 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2144330 ']' 00:06:35.039 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.039 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.039 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.039 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.039 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.297 [2024-07-26 01:42:17.058097] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:35.297 [2024-07-26 01:42:17.058202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144330 ] 00:06:35.297 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.297 [2024-07-26 01:42:17.115857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.298 [2024-07-26 01:42:17.204369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2144453 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.556 
01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2144453 /var/tmp/spdk2.sock 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2144453 /var/tmp/spdk2.sock 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2144453 /var/tmp/spdk2.sock 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2144453 ']' 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.556 01:42:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.556 [2024-07-26 01:42:17.496140] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:35.556 [2024-07-26 01:42:17.496234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144453 ] 00:06:35.556 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.816 [2024-07-26 01:42:17.588370] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2144330 has claimed it. 00:06:35.816 [2024-07-26 01:42:17.588420] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2144453) - No such process 00:06:36.383 ERROR: process (pid: 2144453) is no longer running 00:06:36.383 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.383 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:36.383 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:36.383 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.383 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:36.383 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.383 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 2144330 00:06:36.383 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2144330 00:06:36.383 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.642 lslocks: write error 00:06:36.642 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2144330 00:06:36.642 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2144330 ']' 00:06:36.642 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2144330 00:06:36.642 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:36.642 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.642 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2144330 00:06:36.642 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.642 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.642 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2144330' 00:06:36.642 killing process with pid 2144330 00:06:36.642 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2144330 00:06:36.642 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2144330 00:06:37.211 00:06:37.211 real 0m1.917s 00:06:37.211 user 0m2.079s 00:06:37.211 sys 0m0.615s 00:06:37.211 01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.211 
01:42:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.211 ************************************ 00:06:37.211 END TEST locking_app_on_locked_coremask 00:06:37.211 ************************************ 00:06:37.211 01:42:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.211 01:42:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.211 01:42:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.211 01:42:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.211 ************************************ 00:06:37.211 START TEST locking_overlapped_coremask 00:06:37.211 ************************************ 00:06:37.211 01:42:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:37.211 01:42:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2144624 00:06:37.211 01:42:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.211 01:42:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2144624 /var/tmp/spdk.sock 00:06:37.211 01:42:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2144624 ']' 00:06:37.211 01:42:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.211 01:42:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.211 01:42:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:37.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.211 01:42:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.211 01:42:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.211 [2024-07-26 01:42:19.017131] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:37.211 [2024-07-26 01:42:19.017237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144624 ] 00:06:37.211 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.211 [2024-07-26 01:42:19.081266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.211 [2024-07-26 01:42:19.179674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.211 [2024-07-26 01:42:19.179738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.211 [2024-07-26 01:42:19.179742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2144634 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2144634 /var/tmp/spdk2.sock 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2144634 /var/tmp/spdk2.sock 
00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2144634 /var/tmp/spdk2.sock 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2144634 ']' 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.486 01:42:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.486 [2024-07-26 01:42:19.474093] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:06:37.486 [2024-07-26 01:42:19.474187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144634 ] 00:06:37.749 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.749 [2024-07-26 01:42:19.567536] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2144624 has claimed it. 00:06:37.749 [2024-07-26 01:42:19.567600] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2144634) - No such process 00:06:38.320 ERROR: process (pid: 2144634) is no longer running 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.320 01:42:20 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2144624 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2144624 ']' 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2144624 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2144624 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2144624' 00:06:38.320 killing process with pid 2144624 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2144624 00:06:38.320 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2144624 00:06:38.889 00:06:38.889 real 0m1.645s 00:06:38.889 user 0m4.472s 00:06:38.889 sys 0m0.458s 00:06:38.889 01:42:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.889 01:42:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.889 ************************************ 00:06:38.889 END TEST locking_overlapped_coremask 00:06:38.889 ************************************ 00:06:38.889 01:42:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:38.889 01:42:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.889 01:42:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.889 01:42:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.889 ************************************ 00:06:38.889 START TEST locking_overlapped_coremask_via_rpc 00:06:38.889 ************************************ 00:06:38.889 01:42:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:38.889 01:42:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2144856 00:06:38.889 01:42:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:38.889 01:42:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2144856 /var/tmp/spdk.sock 00:06:38.889 01:42:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2144856 ']' 00:06:38.889 01:42:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.889 01:42:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.889 01:42:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.889 01:42:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.889 01:42:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.889 [2024-07-26 01:42:20.707468] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:38.889 [2024-07-26 01:42:20.707564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144856 ] 00:06:38.889 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.889 [2024-07-26 01:42:20.766919] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:38.889 [2024-07-26 01:42:20.766959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.889 [2024-07-26 01:42:20.854261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.889 [2024-07-26 01:42:20.854282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.889 [2024-07-26 01:42:20.854284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.148 01:42:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.148 01:42:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:39.148 01:42:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2144929 00:06:39.148 01:42:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2144929 /var/tmp/spdk2.sock 00:06:39.148 01:42:21 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2144929 ']' 00:06:39.148 01:42:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.148 01:42:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:39.148 01:42:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.148 01:42:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.148 01:42:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.148 01:42:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.432 [2024-07-26 01:42:21.166285] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:39.432 [2024-07-26 01:42:21.166372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144929 ] 00:06:39.432 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.432 [2024-07-26 01:42:21.252125] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:39.432 [2024-07-26 01:42:21.252158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.432 [2024-07-26 01:42:21.428095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.702 [2024-07-26 01:42:21.432150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:39.702 [2024-07-26 01:42:21.432152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.268 01:42:22 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.268 [2024-07-26 01:42:22.110167] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2144856 has claimed it. 00:06:40.268 request: 00:06:40.268 { 00:06:40.268 "method": "framework_enable_cpumask_locks", 00:06:40.268 "req_id": 1 00:06:40.268 } 00:06:40.268 Got JSON-RPC error response 00:06:40.268 response: 00:06:40.268 { 00:06:40.268 "code": -32603, 00:06:40.268 "message": "Failed to claim CPU core: 2" 00:06:40.268 } 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2144856 /var/tmp/spdk.sock 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 2144856 ']' 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.268 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.526 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.526 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.526 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2144929 /var/tmp/spdk2.sock 00:06:40.526 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2144929 ']' 00:06:40.526 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.526 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.526 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:40.527 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.527 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.786 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.786 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.786 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:40.786 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.786 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.786 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.786 00:06:40.786 real 0m1.950s 00:06:40.786 user 0m1.006s 00:06:40.786 sys 0m0.178s 00:06:40.786 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.786 01:42:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.786 ************************************ 00:06:40.786 END TEST locking_overlapped_coremask_via_rpc 00:06:40.786 ************************************ 00:06:40.786 01:42:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:40.786 01:42:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2144856 ]] 00:06:40.786 01:42:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2144856 00:06:40.786 01:42:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2144856 ']' 00:06:40.786 01:42:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2144856 00:06:40.786 01:42:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:40.786 01:42:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.786 01:42:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2144856 00:06:40.786 01:42:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.786 01:42:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.786 01:42:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2144856' 00:06:40.786 killing process with pid 2144856 00:06:40.786 01:42:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2144856 00:06:40.786 01:42:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2144856 00:06:41.044 01:42:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2144929 ]] 00:06:41.044 01:42:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2144929 00:06:41.044 01:42:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2144929 ']' 00:06:41.045 01:42:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2144929 00:06:41.045 01:42:23 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:41.045 01:42:23 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.045 01:42:23 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2144929 00:06:41.304 01:42:23 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:41.304 01:42:23 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:41.304 01:42:23 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2144929' 00:06:41.304 killing process with pid 2144929 00:06:41.304 01:42:23 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2144929 00:06:41.304 01:42:23 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2144929 00:06:41.570 01:42:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.570 01:42:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:41.570 01:42:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2144856 ]] 00:06:41.570 01:42:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2144856 00:06:41.570 01:42:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2144856 ']' 00:06:41.570 01:42:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2144856 00:06:41.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2144856) - No such process 00:06:41.570 01:42:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2144856 is not found' 00:06:41.570 Process with pid 2144856 is not found 00:06:41.570 01:42:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2144929 ]] 00:06:41.570 01:42:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2144929 00:06:41.570 01:42:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2144929 ']' 00:06:41.570 01:42:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2144929 00:06:41.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2144929) - No such process 00:06:41.570 01:42:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2144929 is not found' 00:06:41.570 Process with pid 2144929 is not found 00:06:41.570 01:42:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.570 00:06:41.570 real 0m15.552s 00:06:41.570 user 0m27.163s 00:06:41.570 sys 0m5.310s 00:06:41.570 01:42:23 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.570 
01:42:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.570 ************************************ 00:06:41.570 END TEST cpu_locks 00:06:41.570 ************************************ 00:06:41.570 00:06:41.570 real 0m39.467s 00:06:41.570 user 1m15.445s 00:06:41.570 sys 0m9.439s 00:06:41.570 01:42:23 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.570 01:42:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.570 ************************************ 00:06:41.570 END TEST event 00:06:41.570 ************************************ 00:06:41.570 01:42:23 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:41.570 01:42:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.570 01:42:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.570 01:42:23 -- common/autotest_common.sh@10 -- # set +x 00:06:41.570 ************************************ 00:06:41.570 START TEST thread 00:06:41.570 ************************************ 00:06:41.570 01:42:23 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:41.829 * Looking for test storage... 
00:06:41.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:41.829 01:42:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.829 01:42:23 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:41.829 01:42:23 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.829 01:42:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.829 ************************************ 00:06:41.830 START TEST thread_poller_perf 00:06:41.830 ************************************ 00:06:41.830 01:42:23 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.830 [2024-07-26 01:42:23.627747] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:41.830 [2024-07-26 01:42:23.627809] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145308 ] 00:06:41.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.830 [2024-07-26 01:42:23.687228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.830 [2024-07-26 01:42:23.774787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.830 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:43.206 ====================================== 00:06:43.206 busy:2715565210 (cyc) 00:06:43.206 total_run_count: 294000 00:06:43.206 tsc_hz: 2700000000 (cyc) 00:06:43.206 ====================================== 00:06:43.206 poller_cost: 9236 (cyc), 3420 (nsec) 00:06:43.206 00:06:43.206 real 0m1.252s 00:06:43.206 user 0m1.168s 00:06:43.206 sys 0m0.079s 00:06:43.206 01:42:24 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.206 01:42:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.206 ************************************ 00:06:43.206 END TEST thread_poller_perf 00:06:43.206 ************************************ 00:06:43.206 01:42:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.206 01:42:24 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:43.206 01:42:24 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.206 01:42:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.206 ************************************ 00:06:43.206 START TEST thread_poller_perf 00:06:43.206 ************************************ 00:06:43.206 01:42:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.206 [2024-07-26 01:42:24.929875] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:06:43.206 [2024-07-26 01:42:24.929944] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145467 ] 00:06:43.206 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.206 [2024-07-26 01:42:24.992595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.206 [2024-07-26 01:42:25.083176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.206 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:44.586 ====================================== 00:06:44.586 busy:2702495186 (cyc) 00:06:44.586 total_run_count: 3860000 00:06:44.586 tsc_hz: 2700000000 (cyc) 00:06:44.586 ====================================== 00:06:44.586 poller_cost: 700 (cyc), 259 (nsec) 00:06:44.586 00:06:44.586 real 0m1.248s 00:06:44.586 user 0m1.156s 00:06:44.586 sys 0m0.086s 00:06:44.586 01:42:26 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.586 01:42:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.586 ************************************ 00:06:44.586 END TEST thread_poller_perf 00:06:44.586 ************************************ 00:06:44.586 01:42:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:44.586 00:06:44.586 real 0m2.646s 00:06:44.586 user 0m2.379s 00:06:44.586 sys 0m0.265s 00:06:44.586 01:42:26 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.586 01:42:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.586 ************************************ 00:06:44.586 END TEST thread 00:06:44.586 ************************************ 00:06:44.586 01:42:26 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:44.586 01:42:26 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
00:06:44.586 01:42:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.586 01:42:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.586 01:42:26 -- common/autotest_common.sh@10 -- # set +x 00:06:44.586 ************************************ 00:06:44.586 START TEST app_cmdline 00:06:44.586 ************************************ 00:06:44.586 01:42:26 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:44.586 * Looking for test storage... 00:06:44.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:44.586 01:42:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:44.586 01:42:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2145672 00:06:44.586 01:42:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:44.586 01:42:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2145672 00:06:44.586 01:42:26 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2145672 ']' 00:06:44.586 01:42:26 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.586 01:42:26 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.586 01:42:26 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.586 01:42:26 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.586 01:42:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.586 [2024-07-26 01:42:26.334320] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:06:44.586 [2024-07-26 01:42:26.334412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145672 ] 00:06:44.586 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.586 [2024-07-26 01:42:26.396600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.586 [2024-07-26 01:42:26.482623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.845 01:42:26 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.845 01:42:26 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:44.845 01:42:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:45.103 { 00:06:45.103 "version": "SPDK v24.09-pre git sha1 704257090", 00:06:45.103 "fields": { 00:06:45.103 "major": 24, 00:06:45.103 "minor": 9, 00:06:45.103 "patch": 0, 00:06:45.103 "suffix": "-pre", 00:06:45.103 "commit": "704257090" 00:06:45.103 } 00:06:45.103 } 00:06:45.103 01:42:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:45.103 01:42:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:45.103 01:42:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:45.103 01:42:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:45.103 01:42:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:45.103 01:42:26 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.103 01:42:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:45.103 01:42:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.103 01:42:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:45.103 01:42:26 app_cmdline -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.103 01:42:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:45.103 01:42:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:45.103 01:42:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.103 01:42:27 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:45.103 01:42:27 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.103 01:42:27 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.103 01:42:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.103 01:42:27 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.103 01:42:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.103 01:42:27 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.103 01:42:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.103 01:42:27 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.103 01:42:27 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:45.103 01:42:27 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.361 request: 00:06:45.361 { 00:06:45.361 "method": "env_dpdk_get_mem_stats", 00:06:45.361 "req_id": 1 
00:06:45.361 } 00:06:45.361 Got JSON-RPC error response 00:06:45.361 response: 00:06:45.361 { 00:06:45.361 "code": -32601, 00:06:45.361 "message": "Method not found" 00:06:45.361 } 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.361 01:42:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2145672 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2145672 ']' 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2145672 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2145672 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2145672' 00:06:45.361 killing process with pid 2145672 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@969 -- # kill 2145672 00:06:45.361 01:42:27 app_cmdline -- common/autotest_common.sh@974 -- # wait 2145672 00:06:45.927 00:06:45.927 real 0m1.449s 00:06:45.927 user 0m1.732s 00:06:45.927 sys 0m0.485s 00:06:45.927 01:42:27 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.927 01:42:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.927 ************************************ 00:06:45.927 END TEST app_cmdline 00:06:45.927 ************************************ 00:06:45.927 01:42:27 -- 
spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:45.927 01:42:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.927 01:42:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.927 01:42:27 -- common/autotest_common.sh@10 -- # set +x 00:06:45.927 ************************************ 00:06:45.927 START TEST version 00:06:45.927 ************************************ 00:06:45.927 01:42:27 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:45.927 * Looking for test storage... 00:06:45.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:45.927 01:42:27 version -- app/version.sh@17 -- # get_header_version major 00:06:45.927 01:42:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.927 01:42:27 version -- app/version.sh@14 -- # cut -f2 00:06:45.927 01:42:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.927 01:42:27 version -- app/version.sh@17 -- # major=24 00:06:45.927 01:42:27 version -- app/version.sh@18 -- # get_header_version minor 00:06:45.927 01:42:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.927 01:42:27 version -- app/version.sh@14 -- # cut -f2 00:06:45.927 01:42:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.927 01:42:27 version -- app/version.sh@18 -- # minor=9 00:06:45.927 01:42:27 version -- app/version.sh@19 -- # get_header_version patch 00:06:45.927 01:42:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.927 01:42:27 version -- app/version.sh@14 -- # cut -f2 00:06:45.927 01:42:27 
version -- app/version.sh@14 -- # tr -d '"' 00:06:45.927 01:42:27 version -- app/version.sh@19 -- # patch=0 00:06:45.927 01:42:27 version -- app/version.sh@20 -- # get_header_version suffix 00:06:45.927 01:42:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.927 01:42:27 version -- app/version.sh@14 -- # cut -f2 00:06:45.927 01:42:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.927 01:42:27 version -- app/version.sh@20 -- # suffix=-pre 00:06:45.927 01:42:27 version -- app/version.sh@22 -- # version=24.9 00:06:45.927 01:42:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:45.927 01:42:27 version -- app/version.sh@28 -- # version=24.9rc0 00:06:45.927 01:42:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:45.927 01:42:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:45.927 01:42:27 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:45.927 01:42:27 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:45.927 00:06:45.927 real 0m0.103s 00:06:45.927 user 0m0.051s 00:06:45.927 sys 0m0.074s 00:06:45.927 01:42:27 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.927 01:42:27 version -- common/autotest_common.sh@10 -- # set +x 00:06:45.927 ************************************ 00:06:45.927 END TEST version 00:06:45.927 ************************************ 00:06:45.927 01:42:27 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:45.927 01:42:27 -- spdk/autotest.sh@202 -- # uname -s 00:06:45.927 01:42:27 -- spdk/autotest.sh@202 -- # [[ Linux == 
Linux ]] 00:06:45.927 01:42:27 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:45.927 01:42:27 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:45.927 01:42:27 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:06:45.927 01:42:27 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:45.927 01:42:27 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:45.927 01:42:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:45.927 01:42:27 -- common/autotest_common.sh@10 -- # set +x 00:06:45.927 01:42:27 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:45.927 01:42:27 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:45.927 01:42:27 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:45.927 01:42:27 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:45.927 01:42:27 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:45.927 01:42:27 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:45.927 01:42:27 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:45.927 01:42:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:45.927 01:42:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.927 01:42:27 -- common/autotest_common.sh@10 -- # set +x 00:06:45.927 ************************************ 00:06:45.927 START TEST nvmf_tcp 00:06:45.927 ************************************ 00:06:45.927 01:42:27 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:46.187 * Looking for test storage... 00:06:46.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:46.187 01:42:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:46.187 01:42:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:46.187 01:42:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:46.187 01:42:27 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:46.187 01:42:27 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.187 01:42:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.187 ************************************ 00:06:46.187 START TEST nvmf_target_core 00:06:46.187 ************************************ 00:06:46.187 01:42:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:46.187 * Looking for test storage... 00:06:46.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.187 01:42:28 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:46.187 ************************************ 00:06:46.187 START TEST nvmf_abort 00:06:46.187 ************************************ 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:46.187 * Looking for test storage... 
00:06:46.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:46.187 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:46.188 01:42:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:46.188 01:42:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:48.091 01:42:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:48.091 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:48.091 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:48.091 01:42:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.091 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:48.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.092 01:42:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:48.092 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.092 01:42:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.092 01:42:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.092 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.092 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.092 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:48.092 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.092 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.092 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.092 01:42:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:48.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:06:48.092 00:06:48.092 --- 10.0.0.2 ping statistics --- 00:06:48.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.092 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:06:48.092 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:48.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:06:48.092 00:06:48.092 --- 10.0.0.1 ping statistics --- 00:06:48.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.092 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:48.351 01:42:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2147709 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2147709 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2147709 ']' 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.351 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.351 [2024-07-26 01:42:30.181725] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:06:48.352 [2024-07-26 01:42:30.181817] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.352 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.352 [2024-07-26 01:42:30.247231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.352 [2024-07-26 01:42:30.337485] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.352 [2024-07-26 01:42:30.337547] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.352 [2024-07-26 01:42:30.337560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.352 [2024-07-26 01:42:30.337571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.352 [2024-07-26 01:42:30.337580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:48.352 [2024-07-26 01:42:30.337669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.352 [2024-07-26 01:42:30.337728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.352 [2024-07-26 01:42:30.337730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.611 [2024-07-26 01:42:30.483013] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.611 Malloc0 00:06:48.611 01:42:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.611 Delay0 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.611 [2024-07-26 01:42:30.552442] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.611 01:42:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:48.611 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.869 [2024-07-26 01:42:30.659106] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:50.772 Initializing NVMe Controllers 00:06:50.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:50.773 controller IO queue size 128 less than required 00:06:50.773 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:50.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:50.773 Initialization complete. Launching workers. 
00:06:50.773 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32971 00:06:50.773 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33032, failed to submit 62 00:06:50.773 success 32975, unsuccess 57, failed 0 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:50.773 rmmod nvme_tcp 00:06:50.773 rmmod nvme_fabrics 00:06:50.773 rmmod nvme_keyring 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:50.773 01:42:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2147709 ']' 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2147709 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2147709 ']' 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2147709 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.773 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2147709 00:06:51.030 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:51.030 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:51.030 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2147709' 00:06:51.030 killing process with pid 2147709 00:06:51.030 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2147709 00:06:51.030 01:42:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2147709 00:06:51.290 01:42:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:51.290 01:42:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:51.290 01:42:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:51.290 01:42:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:51.290 01:42:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:51.290 01:42:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.290 01:42:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:51.290 01:42:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.207 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:53.207 00:06:53.207 real 0m7.029s 00:06:53.207 user 0m10.105s 00:06:53.207 sys 0m2.450s 00:06:53.207 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.207 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.207 ************************************ 00:06:53.207 END TEST nvmf_abort 00:06:53.207 ************************************ 00:06:53.207 01:42:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:53.207 01:42:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:53.207 01:42:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.207 01:42:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:53.207 ************************************ 00:06:53.207 START TEST nvmf_ns_hotplug_stress 00:06:53.207 ************************************ 00:06:53.207 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:53.207 * Looking for test storage... 
00:06:53.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.207 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.207 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:53.208 01:42:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:53.208 01:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:55.115 01:42:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:55.115 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:55.115 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.115 01:42:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:55.115 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.115 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:55.115 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:55.116 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:55.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:55.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:06:55.375 00:06:55.375 --- 10.0.0.2 ping statistics --- 00:06:55.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.375 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:55.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:55.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:06:55.375 00:06:55.375 --- 10.0.0.1 ping statistics --- 00:06:55.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.375 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2149931 00:06:55.375 01:42:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2149931 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2149931 ']' 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.375 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.375 [2024-07-26 01:42:37.302355] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:06:55.375 [2024-07-26 01:42:37.302444] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.375 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.375 [2024-07-26 01:42:37.372153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.633 [2024-07-26 01:42:37.461859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:55.633 [2024-07-26 01:42:37.461922] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.633 [2024-07-26 01:42:37.461939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.633 [2024-07-26 01:42:37.461953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.633 [2024-07-26 01:42:37.461965] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.633 [2024-07-26 01:42:37.462097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.633 [2024-07-26 01:42:37.462177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.633 [2024-07-26 01:42:37.462180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.633 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.633 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:55.633 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.633 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:55.633 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.633 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.633 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:55.633 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:06:55.891 [2024-07-26 01:42:37.834990] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.891 01:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:56.150 01:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:56.408 [2024-07-26 01:42:38.343706] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.408 01:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:56.665 01:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:56.923 Malloc0 00:06:56.923 01:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:57.180 Delay0 00:06:57.180 01:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.482 01:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:57.764 NULL1 00:06:57.764 01:42:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:58.022 01:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2150345 00:06:58.022 01:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:58.022 01:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:06:58.022 01:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.022 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.399 Read completed with error (sct=0, sc=11) 00:06:59.399 01:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.399 01:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:59.399 01:42:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:59.657 true 00:06:59.657 01:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:06:59.657 01:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.595 01:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.854 01:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:00.854 01:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:00.854 true 00:07:01.114 01:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:01.114 01:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.114 01:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.680 01:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:01.680 01:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:01.680 true 00:07:01.680 01:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:01.680 01:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.613 01:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.870 01:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:02.870 01:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:03.129 true 00:07:03.129 01:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:03.129 01:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.386 01:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.643 01:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1005 00:07:03.643 01:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:03.901 true 00:07:03.901 01:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:03.901 01:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.839 01:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.096 01:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:05.096 01:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:05.354 true 00:07:05.354 01:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:05.354 01:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.612 01:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.870 01:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:05.870 01:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:06.128 true 00:07:06.128 01:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:06.128 01:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.065 01:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.065 01:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:07.065 01:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:07.323 true 00:07:07.323 01:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:07.323 01:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.580 01:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.837 01:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:07.837 01:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:08.096 true 00:07:08.096 01:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:08.096 01:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.034 01:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.291 01:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:09.291 01:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:09.549 true 00:07:09.549 01:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:09.549 01:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.806 01:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.065 01:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:10.066 01:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:10.066 true 00:07:10.066 01:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:10.066 01:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.000 01:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.258 01:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:11.258 01:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:11.516 true 00:07:11.516 01:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:11.516 01:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.773 01:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.030 01:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:12.030 01:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:12.287 true 00:07:12.287 01:42:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:12.287 01:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.544 01:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.802 01:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:12.802 01:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:13.060 true 00:07:13.060 01:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:13.060 01:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.998 01:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.297 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.297 01:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:14.297 01:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 
00:07:14.554 true 00:07:14.554 01:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:14.554 01:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.811 01:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.068 01:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:15.068 01:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:15.326 true 00:07:15.326 01:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:15.326 01:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.263 01:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.522 01:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:16.522 01:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:16.780 true 00:07:16.780 01:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2150345 00:07:16.780 01:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.038 01:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.298 01:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:17.298 01:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:17.298 true 00:07:17.556 01:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:17.556 01:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.556 01:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.814 01:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:17.814 01:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:18.072 true 00:07:18.072 01:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:18.072 01:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.448 01:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.448 01:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:19.448 01:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:19.707 true 00:07:19.707 01:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:19.707 01:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.965 01:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.223 01:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:20.223 01:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:20.481 true 00:07:20.481 01:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:20.481 01:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.417 01:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.675 01:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:21.675 01:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:21.933 true 00:07:21.933 01:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:21.933 01:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.192 01:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.451 01:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:22.451 01:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:22.451 true 00:07:22.711 01:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:22.711 01:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.278 01:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.536 01:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:23.536 01:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:23.793 true 00:07:23.793 01:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:23.793 01:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.052 01:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.310 01:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:24.310 01:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:24.568 true 00:07:24.568 01:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:24.568 01:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.506 01:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.764 01:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:25.764 01:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:26.022 true 00:07:26.022 01:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:26.022 01:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.281 01:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.539 01:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:26.539 01:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:26.797 true 00:07:26.797 01:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:26.797 01:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.734 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.734 01:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.995 01:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:27.995 01:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:27.995 true 00:07:28.254 01:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:28.254 01:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.254 Initializing NVMe Controllers 00:07:28.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:28.254 Controller IO queue size 128, less than required. 00:07:28.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.254 Controller IO queue size 128, less than required. 00:07:28.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:28.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:28.254 Initialization complete. Launching workers. 
00:07:28.254 ======================================================== 00:07:28.254 Latency(us) 00:07:28.254 Device Information : IOPS MiB/s Average min max 00:07:28.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 705.84 0.34 94251.64 2503.64 1040313.67 00:07:28.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11217.04 5.48 11410.88 3471.51 451827.02 00:07:28.254 ======================================================== 00:07:28.254 Total : 11922.88 5.82 16315.08 2503.64 1040313.67 00:07:28.254 00:07:28.254 01:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.511 01:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:28.511 01:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:28.769 true 00:07:28.769 01:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2150345 00:07:28.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2150345) - No such process 00:07:28.769 01:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2150345 00:07:28.769 01:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.027 01:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.284 
01:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:29.284 01:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:29.284 01:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:29.284 01:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.284 01:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:29.541 null0 00:07:29.541 01:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:29.541 01:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.541 01:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:29.798 null1 00:07:29.798 01:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:29.798 01:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.798 01:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:30.075 null2 00:07:30.075 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.075 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.075 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:30.348 null3 00:07:30.348 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.348 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.348 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:30.606 null4 00:07:30.606 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.606 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.606 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:30.863 null5 00:07:30.863 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.863 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.863 01:43:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:31.120 null6 00:07:31.120 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.120 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.120 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:31.378 null7 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.378 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2154404 2154406 2154409 2154412 2154416 2154419 2154422 2154426 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.379 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.636 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:31.636 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.636 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.636 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.636 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:31.636 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.636 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.636 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.894 01:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.152 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.152 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.152 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:32.152 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.152 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:07:32.152 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.152 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:32.152 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.410 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:32.668 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.668 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.668 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:32.668 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:32.668 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.668 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.668 01:43:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.668 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.927 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.185 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.185 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.185 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.185 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.185 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.185 01:43:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.185 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.185 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.444 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.444 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.444 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.444 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.444 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.444 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.702 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.703 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.961 01:43:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.961 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.961 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.961 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.961 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.961 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.961 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.961 01:43:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.219 01:43:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.219 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:34.477 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.477 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:34.477 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.477 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.477 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.477 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.477 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:34.477 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.735 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:34.736 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.736 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.736 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:34.736 01:43:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.736 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.736 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:34.736 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.736 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.736 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.736 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.736 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.736 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.994 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.994 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:34.994 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.994 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.994 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.994 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:34.994 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.994 01:43:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.252 
01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.252 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.510 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.510 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.510 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.510 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.510 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.510 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.510 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.510 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.769 01:43:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.769 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.027 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.027 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.027 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.027 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.028 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.028 01:43:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.028 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.028 01:43:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.286 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.287 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.545 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.545 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.545 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.545 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.545 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.545 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.545 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.545 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.803 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.803 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.803 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.803 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.803 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.803 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.803 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.803 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:36.804 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:37.063 rmmod nvme_tcp
00:07:37.063 rmmod nvme_fabrics
00:07:37.063 rmmod nvme_keyring
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2149931 ']'
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2149931
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2149931 ']'
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2149931
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2149931
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2149931'
00:07:37.063 killing process with pid 2149931
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2149931
00:07:37.063 01:43:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2149931
00:07:37.322 01:43:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:37.322 01:43:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:37.322 01:43:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:37.322 01:43:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:37.322 01:43:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:37.322 01:43:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:37.322 01:43:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:37.322 01:43:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:39.224 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:39.224
00:07:39.224 real 0m46.029s
00:07:39.224 user 3m30.154s
00:07:39.224 sys 0m16.033s
00:07:39.224 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:39.224 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:39.224 ************************************
00:07:39.224 END TEST nvmf_ns_hotplug_stress
00:07:39.224 ************************************
00:07:39.224 01:43:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:39.224 01:43:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:39.224 01:43:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:39.224 01:43:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:39.224 ************************************
00:07:39.224 START TEST nvmf_delete_subsystem
00:07:39.224 ************************************
00:07:39.224 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:39.483 * Looking for test storage...
00:07:39.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.483 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.484 01:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.386 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.386 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.386 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.386 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.386 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.386 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.386 01:43:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.386 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.387 01:43:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:41.387 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:41.387 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.387 01:43:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:41.387 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:41.387 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.387 
01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:07:41.387 00:07:41.387 --- 10.0.0.2 ping statistics --- 00:07:41.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.387 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:41.387 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:07:41.387 00:07:41.387 --- 10.0.0.1 ping statistics --- 00:07:41.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.387 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:07:41.388 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.388 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:41.388 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.388 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.388 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.388 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.388 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.388 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.388 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2157173 00:07:41.645 01:43:23 
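The `nvmf/common.sh@229-268` steps traced above build a two-interface loopback topology: the target NIC is moved into a network namespace, each side gets a 10.0.0.x/24 address, port 4420 is opened, and connectivity is verified with `ping` in both directions. A standalone sketch of that sequence is below; the interface names, namespace name, and addresses are taken from the log, while the `run`/`DRY_RUN` wrapper is an addition of this sketch (the real commands need root):

```shell
#!/usr/bin/env bash
# Sketch of the netns topology built in nvmf/common.sh@229-268 (names/IPs from the log).
# The run()/DRY_RUN wrapper is not in the original script; it lets the command
# sequence be printed without root privileges.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

setup_topology() {
  local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
  local tgt_ip=10.0.0.2 ini_ip=10.0.0.1

  run ip -4 addr flush "$tgt_if"
  run ip -4 addr flush "$ini_if"
  run ip netns add "$ns"
  run ip link set "$tgt_if" netns "$ns"          # target side lives in the namespace
  run ip addr add "$ini_ip/24" dev "$ini_if"
  run ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 "$tgt_ip"                        # verify both directions
  run ip netns exec "$ns" ping -c 1 "$ini_ip"
}

# DRY_RUN=1 setup_topology   # print the commands without executing them
```

With `DRY_RUN=1` the function only echoes the commands, which mirrors what the xtrace lines above show executing for real.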
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2157173 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2157173 ']' 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.645 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.645 [2024-07-26 01:43:23.453793] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:07:41.646 [2024-07-26 01:43:23.453871] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.646 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.646 [2024-07-26 01:43:23.517891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:41.646 [2024-07-26 01:43:23.607120] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:41.646 [2024-07-26 01:43:23.607200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.646 [2024-07-26 01:43:23.607214] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.646 [2024-07-26 01:43:23.607225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.646 [2024-07-26 01:43:23.607235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.646 [2024-07-26 01:43:23.607291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.646 [2024-07-26 01:43:23.607294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.903 [2024-07-26 01:43:23.753309] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.903 [2024-07-26 01:43:23.769593] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.903 NULL1 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.903 01:43:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.903 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.904 Delay0 00:07:41.904 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.904 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.904 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.904 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.904 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.904 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2157207 00:07:41.904 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:41.904 01:43:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:41.904 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.904 [2024-07-26 01:43:23.844210] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
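The `delete_subsystem.sh@13-28` trace above amounts to: start `nvmf_tgt` inside the target namespace, create a TCP transport, create subsystem `cnode1` with a 4420 listener, then expose a null bdev wrapped in a `Delay0` delay bdev as its namespace. A sketch of that RPC sequence follows; the method names and arguments are copied from the log, but the `rpc.py` path and the `run`/`DRY_RUN` wrapper are assumptions of this sketch (the real script drives the same RPCs through its `rpc_cmd` helper):

```shell
#!/usr/bin/env bash
# Sketch of the target bring-up in delete_subsystem.sh@13-28. RPC names/args are
# from the log; SPDK_BIN/RPC paths and the DRY_RUN wrapper are assumptions.
SPDK_BIN=${SPDK_BIN:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin}
RPC=${RPC:-scripts/rpc.py}
NS=cvl_0_0_ns_spdk
NQN=nqn.2016-06.io.spdk:cnode1

run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

setup_target() {
  # start the target on cores 0-1 inside the namespace (backgrounded in the real script)
  run ip netns exec "$NS" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3
  run "$RPC" nvmf_create_transport -t tcp -o -u 8192
  run "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
  run "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  run "$RPC" bdev_null_create NULL1 1000 512    # 1000 MiB backing bdev, 512 B blocks
  # wrap NULL1 so every I/O path carries a 1,000,000 us latency
  run "$RPC" bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  run "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
}
```

The delay bdev is what keeps I/O in flight long enough for the subsequent `nvmf_delete_subsystem` call to race against active requests, which is the point of this test.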
00:07:43.799 01:43:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.799 01:43:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.799 01:43:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error 
(sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 [2024-07-26 01:43:25.980783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1524d40 is same with the state(5) to be set 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 
Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 starting I/O failed: -6 00:07:44.058 [2024-07-26 01:43:25.981451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f74e0000c00 is same with the state(5) to be set 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 
Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.058 Read completed with error (sct=0, sc=8) 00:07:44.058 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error 
(sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 
Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Write completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 Read completed with error (sct=0, sc=8) 00:07:44.059 [2024-07-26 01:43:25.981938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151fce0 is same with the state(5) to be set 00:07:44.992 [2024-07-26 01:43:26.941354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c630 is same with the state(5) to be set 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read 
completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 [2024-07-26 01:43:26.978956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f74e000d000 is same with the state(5) to be set 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 [2024-07-26 01:43:26.984695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f74e000d660 is same with the state(5) to be set 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 
Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 [2024-07-26 01:43:26.985136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151fb00 is same with the state(5) to be set 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.992 Write completed with error (sct=0, sc=8) 00:07:44.992 Read completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Write completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Write completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Write completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 Read completed with error (sct=0, sc=8) 00:07:44.993 [2024-07-26 01:43:26.985310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520010 is same with the state(5) to be set 00:07:44.993 Initializing NVMe Controllers 00:07:44.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:07:44.993 Controller IO queue size 128, less than required. 00:07:44.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:44.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:44.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:44.993 Initialization complete. Launching workers. 00:07:44.993 ======================================================== 00:07:44.993 Latency(us) 00:07:44.993 Device Information : IOPS MiB/s Average min max 00:07:44.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.80 0.08 914215.24 1165.53 1013797.93 00:07:44.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.91 0.07 996258.24 488.39 2001809.42 00:07:44.993 ======================================================== 00:07:44.993 Total : 308.72 0.15 953258.21 488.39 2001809.42 00:07:44.993 00:07:44.993 [2024-07-26 01:43:26.986157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c630 (9): Bad file descriptor 00:07:44.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:44.993 01:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.993 01:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:44.993 01:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2157207 00:07:44.993 01:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # 
kill -0 2157207 00:07:45.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2157207) - No such process 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2157207 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2157207 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2157207 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.559 01:43:27 
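The `kill: (2157207) - No such process` line above is the expected exit of the poll loop at `delete_subsystem.sh` lines 34-38: after the subsystem is deleted, the script probes the perf PID with `kill -0` every 0.5 s until the process is gone. A runnable sketch of that loop, under the same iteration bound:

```shell
#!/usr/bin/env bash
# Sketch of the wait loop in delete_subsystem.sh@34-38 (and @56-60 later):
# poll a PID with kill -0 until it exits, giving up after ~15 s.
wait_for_exit() {
  local pid=$1 delay=0
  while kill -0 "$pid" 2>/dev/null; do
    if (( delay++ > 30 )); then
      return 1                     # still alive after 30 x 0.5 s
    fi
    sleep 0.5
  done
  return 0                         # kill -0 failed: process is gone
}
```

`kill -0` sends no signal; it only checks whether the PID exists, so the failing probe (logged as "No such process") is what tells the script the perf workload has terminated.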
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.559 [2024-07-26 01:43:27.508619] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2157616 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2157616 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.559 01:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:45.559 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.856 [2024-07-26 01:43:27.572576] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:46.166 01:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.166 01:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2157616 00:07:46.166 01:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.734 01:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.735 01:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2157616 00:07:46.735 01:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.299 01:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.299 01:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2157616 00:07:47.299 01:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.557 01:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.557 01:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2157616 00:07:47.557 01:43:29 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.123 01:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:48.123 01:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2157616 00:07:48.123 01:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.689 01:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:48.689 01:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2157616 00:07:48.689 01:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.689 Initializing NVMe Controllers 00:07:48.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:48.689 Controller IO queue size 128, less than required. 00:07:48.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:48.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:48.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:48.689 Initialization complete. Launching workers. 
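The repeated `(( delay++ > 20 ))` / `kill -0 2157616` / `sleep 0.5` records above are the wait loop from `target/delete_subsystem.sh` polling the background `spdk_nvme_perf` process until it exits. A minimal standalone sketch of that pattern, with `sleep 2` standing in for the real perf workload:

```shell
# Bounded wait on a background PID, mirroring delete_subsystem.sh lines 56-60.
sleep 2 &                        # hypothetical stand-in for spdk_nvme_perf
perf_pid=$!
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
  # give up after ~10 s (21 iterations of 0.5 s), mirroring the `> 20` bound
  if (( delay++ > 20 )); then
    echo "timed out waiting for pid $perf_pid" >&2
    break
  fi
  sleep 0.5
done
wait "$perf_pid" 2>/dev/null || true
echo "pid $perf_pid exited"
```

Once the process is gone, `kill -0` fails and the loop falls through; that is why the log later shows `kill: (2157616) - No such process` followed by `wait 2157616`.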
00:07:48.689 ======================================================== 00:07:48.689 Latency(us) 00:07:48.689 Device Information : IOPS MiB/s Average min max 00:07:48.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003808.71 1000309.02 1012700.23 00:07:48.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004985.86 1000218.62 1012598.13 00:07:48.689 ======================================================== 00:07:48.689 Total : 256.00 0.12 1004397.28 1000218.62 1012700.23 00:07:48.689 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2157616 00:07:49.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2157616) - No such process 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2157616 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:07:49.255 rmmod nvme_tcp 00:07:49.255 rmmod nvme_fabrics 00:07:49.255 rmmod nvme_keyring 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2157173 ']' 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2157173 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2157173 ']' 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2157173 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2157173 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2157173' 00:07:49.255 killing process with pid 2157173 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2157173 00:07:49.255 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 
2157173 00:07:49.515 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:49.515 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:49.515 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:49.515 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.515 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:49.515 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.515 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.515 01:43:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.419 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.419 00:07:51.419 real 0m12.176s 00:07:51.419 user 0m27.582s 00:07:51.419 sys 0m2.898s 00:07:51.419 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.419 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.419 ************************************ 00:07:51.419 END TEST nvmf_delete_subsystem 00:07:51.419 ************************************ 00:07:51.419 01:43:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:51.419 01:43:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:51.419 01:43:33 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.419 01:43:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.678 ************************************ 00:07:51.678 START TEST nvmf_host_management 00:07:51.678 ************************************ 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:51.678 * Looking for test storage... 00:07:51.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.678 01:43:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.678 01:43:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:51.678 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:51.679 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.679 01:43:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:53.586 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:53.586 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:53.586 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:07:53.586 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.586 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:53.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:07:53.587 00:07:53.587 --- 10.0.0.2 ping statistics --- 00:07:53.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.587 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:07:53.587 00:07:53.587 --- 10.0.0.1 ping statistics --- 00:07:53.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.587 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:53.587 01:43:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2159955 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2159955 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2159955 ']' 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.587 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.587 [2024-07-26 01:43:35.591819] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:07:53.587 [2024-07-26 01:43:35.591903] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.844 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.844 [2024-07-26 01:43:35.658539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.844 [2024-07-26 01:43:35.748299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.844 [2024-07-26 01:43:35.748373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.844 [2024-07-26 01:43:35.748388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.844 [2024-07-26 01:43:35.748400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.844 [2024-07-26 01:43:35.748410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:53.844 [2024-07-26 01:43:35.748473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.844 [2024-07-26 01:43:35.748533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.844 [2024-07-26 01:43:35.748631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:53.844 [2024-07-26 01:43:35.748635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 [2024-07-26 01:43:35.893582] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:54.102 01:43:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 Malloc0 00:07:54.102 [2024-07-26 01:43:35.954104] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2160124 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2160124 /var/tmp/bdevperf.sock 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2160124 ']' 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:54.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:54.102 { 00:07:54.102 "params": { 00:07:54.102 "name": "Nvme$subsystem", 00:07:54.102 "trtype": "$TEST_TRANSPORT", 00:07:54.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:54.102 "adrfam": "ipv4", 00:07:54.102 "trsvcid": "$NVMF_PORT", 00:07:54.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:54.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:54.102 "hdgst": ${hdgst:-false}, 
00:07:54.102 "ddgst": ${ddgst:-false} 00:07:54.102 }, 00:07:54.102 "method": "bdev_nvme_attach_controller" 00:07:54.102 } 00:07:54.102 EOF 00:07:54.102 )") 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:54.102 01:43:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:54.102 "params": { 00:07:54.102 "name": "Nvme0", 00:07:54.102 "trtype": "tcp", 00:07:54.102 "traddr": "10.0.0.2", 00:07:54.102 "adrfam": "ipv4", 00:07:54.102 "trsvcid": "4420", 00:07:54.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.102 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:54.102 "hdgst": false, 00:07:54.102 "ddgst": false 00:07:54.102 }, 00:07:54.102 "method": "bdev_nvme_attach_controller" 00:07:54.102 }' 00:07:54.102 [2024-07-26 01:43:36.032416] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:07:54.103 [2024-07-26 01:43:36.032502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160124 ] 00:07:54.103 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.103 [2024-07-26 01:43:36.094024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.360 [2024-07-26 01:43:36.180590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.617 Running I/O for 10 seconds... 
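The trace above (host_management.sh@72) shows how the bdevperf attach config is produced: `gen_nvmf_target_json` expands a heredoc template into the resolved `bdev_nvme_attach_controller` JSON, and the result reaches bdevperf as `--json /dev/fd/63`, i.e. via bash process substitution. A minimal self-contained sketch of that mechanism, with a hypothetical `gen_target_json` standing in for the real helper and plain `cat` standing in for bdevperf:

```shell
#!/usr/bin/env bash
# Sketch only: gen_target_json is a stand-in for nvmf/common.sh's
# gen_nvmf_target_json, with the template variables already resolved
# to the values seen in the log (Nvme0, 10.0.0.2:4420, cnode0/host0).
gen_target_json() {
  cat <<EOF
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# The real call is roughly:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) ...
# Bash turns <(...) into a /dev/fd/N path (hence "--json /dev/fd/63" in
# the trace). Here `cat` consumes that path instead of bdevperf:
methods=$(cat <(gen_target_json) | grep -c '"method"')
echo "methods=$methods"
```

This is why the literal string `/dev/fd/63` appears in the logged command line: it is the file descriptor bash allocated for the process substitution, not a file on disk.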
00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:54.617 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:54.874 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:54.874 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:54.874 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:54.874 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:54.874 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.874 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.874 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.874 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=534 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 534 -ge 100 ']' 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.875 [2024-07-26 01:43:36.873196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3e80 is same with the state(5) to be set 00:07:54.875 [2024-07-26 01:43:36.873305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3e80 is same with the state(5) to be set 00:07:54.875 [2024-07-26 01:43:36.873322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3e80 is same with the state(5) to be set 00:07:54.875 [2024-07-26 01:43:36.873334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3e80 is same with the state(5) to be set 00:07:54.875 [2024-07-26 01:43:36.873347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3e80 is same with the state(5) to be set 00:07:54.875 [2024-07-26 01:43:36.873367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3e80 is same with the state(5) to be set 00:07:54.875 [2024-07-26 01:43:36.873379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3e80 is same with the state(5) to be set 00:07:54.875 [2024-07-26 
01:43:36.873392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3e80 is same with the state(5) to be set 00:07:54.875 [2024-07-26 01:43:36.873414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3e80 is same with the state(5) to be set 00:07:54.875 [2024-07-26 01:43:36.873427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3e80 is same with the state(5) to be set 00:07:54.875 [2024-07-26 01:43:36.873439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3e80 is same with the state(5) to be set 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.875 01:43:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:55.132 [2024-07-26 01:43:36.887635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:55.132 [2024-07-26 01:43:36.887679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.132 [2024-07-26 01:43:36.887698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:55.132 [2024-07-26 01:43:36.887713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.132 [2024-07-26 01:43:36.887731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:55.132 [2024-07-26 01:43:36.887745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.132 [2024-07-26 01:43:36.887759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:55.132 [2024-07-26 01:43:36.887773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.132 [2024-07-26 01:43:36.887786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d56ed0 is same with the state(5) to be set 00:07:55.132 [2024-07-26 01:43:36.887880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.887902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.887928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.887944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.887960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.887974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.887989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 
[2024-07-26 01:43:36.888179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 
[2024-07-26 01:43:36.888854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.888972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.888986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 
01:43:36.889531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.133 [2024-07-26 01:43:36.889829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.133 [2024-07-26 01:43:36.889911] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2188a90 was disconnected and freed. reset controller. 
00:07:55.133 [2024-07-26 01:43:36.891026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:55.133 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:55.133 00:07:55.133 Latency(us) 00:07:55.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.133 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:55.133 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:55.133 Verification LBA range: start 0x0 length 0x400 00:07:55.133 Nvme0n1 : 0.41 1553.36 97.09 155.34 0.00 36402.84 2718.53 33981.63 00:07:55.133 =================================================================================================================== 00:07:55.133 Total : 1553.36 97.09 155.34 0.00 36402.84 2718.53 33981.63 00:07:55.133 [2024-07-26 01:43:36.892893] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.133 [2024-07-26 01:43:36.892921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d56ed0 (9): Bad file descriptor 00:07:55.133 [2024-07-26 01:43:36.899768] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:56.065 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2160124 00:07:56.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2160124) - No such process 00:07:56.065 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:56.065 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:56.065 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:56.065 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:56.065 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:56.065 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:56.065 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:56.065 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:56.065 { 00:07:56.065 "params": { 00:07:56.065 "name": "Nvme$subsystem", 00:07:56.065 "trtype": "$TEST_TRANSPORT", 00:07:56.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:56.065 "adrfam": "ipv4", 00:07:56.065 "trsvcid": "$NVMF_PORT", 00:07:56.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:56.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:56.065 "hdgst": ${hdgst:-false}, 00:07:56.065 "ddgst": ${ddgst:-false} 00:07:56.065 }, 00:07:56.065 "method": "bdev_nvme_attach_controller" 00:07:56.065 } 00:07:56.065 EOF 00:07:56.065 )") 00:07:56.065 
01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:56.065 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:56.065 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:56.066 01:43:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:56.066 "params": { 00:07:56.066 "name": "Nvme0", 00:07:56.066 "trtype": "tcp", 00:07:56.066 "traddr": "10.0.0.2", 00:07:56.066 "adrfam": "ipv4", 00:07:56.066 "trsvcid": "4420", 00:07:56.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:56.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:56.066 "hdgst": false, 00:07:56.066 "ddgst": false 00:07:56.066 }, 00:07:56.066 "method": "bdev_nvme_attach_controller" 00:07:56.066 }' 00:07:56.066 [2024-07-26 01:43:37.930536] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:07:56.066 [2024-07-26 01:43:37.930624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160284 ] 00:07:56.066 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.066 [2024-07-26 01:43:37.994614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.323 [2024-07-26 01:43:38.082017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.580 Running I/O for 1 seconds... 
00:07:57.512 00:07:57.512 Latency(us) 00:07:57.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.512 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:57.513 Verification LBA range: start 0x0 length 0x400 00:07:57.513 Nvme0n1 : 1.02 1627.58 101.72 0.00 0.00 38606.68 6262.33 37282.70 00:07:57.513 =================================================================================================================== 00:07:57.513 Total : 1627.58 101.72 0.00 0.00 38606.68 6262.33 37282.70 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.770 rmmod nvme_tcp 
00:07:57.770 rmmod nvme_fabrics 00:07:57.770 rmmod nvme_keyring 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2159955 ']' 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2159955 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2159955 ']' 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2159955 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2159955 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2159955' 00:07:57.770 killing process with pid 2159955 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2159955 00:07:57.770 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2159955 00:07:58.028 [2024-07-26 01:43:39.946337] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:58.028 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.028 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:58.029 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:58.029 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.029 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:58.029 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.029 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.029 01:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:00.558 00:08:00.558 real 0m8.586s 00:08:00.558 user 0m19.877s 00:08:00.558 sys 0m2.509s 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.558 ************************************ 00:08:00.558 END TEST nvmf_host_management 00:08:00.558 ************************************ 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh 
--transport=tcp 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.558 ************************************ 00:08:00.558 START TEST nvmf_lvol 00:08:00.558 ************************************ 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:00.558 * Looking for test storage... 00:08:00.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.558 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.559 01:43:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.459 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:02.460 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:02.460 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.460 01:43:44 
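The xtrace above builds per-family PCI ID arrays (e810, x722, mlx) and then matches each discovered device against them; both 0x8086:0x159b ports in this run land in the e810 bucket and are claimed by the ice driver. A minimal sketch of that classification, transcribed from the vendor:device pairs visible in this log (the dict layout and helper name are illustrative, not SPDK's — common.sh does this with bash arrays):

```python
# Vendor:device -> NIC family, transcribed from the nvmf/common.sh arrays above
# (intel=0x8086, mellanox=0x15b3). Layout is illustrative only.
PCI_FAMILIES = {
    ("0x8086", "0x1592"): "e810",
    ("0x8086", "0x159b"): "e810",
    ("0x8086", "0x37d2"): "x722",
    ("0x15b3", "0xa2dc"): "mlx",
    ("0x15b3", "0x1021"): "mlx",
    ("0x15b3", "0xa2d6"): "mlx",
    ("0x15b3", "0x101d"): "mlx",
    ("0x15b3", "0x1017"): "mlx",
    ("0x15b3", "0x1019"): "mlx",
    ("0x15b3", "0x1015"): "mlx",
    ("0x15b3", "0x1013"): "mlx",
}

def classify(vendor: str, device: str) -> str:
    """Return the NIC family for a vendor:device pair, or 'unknown'."""
    return PCI_FAMILIES.get((vendor.lower(), device.lower()), "unknown")

# Both ports found in this run (0000:0a:00.0/.1) are E810, driver ice.
print(classify("0x8086", "0x159b"))
```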
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:02.460 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:02.460 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:08:02.460 00:08:02.460 --- 10.0.0.2 ping statistics --- 00:08:02.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.460 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:08:02.460 00:08:02.460 --- 10.0.0.1 ping statistics --- 00:08:02.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.460 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
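The namespace plumbing above moves cvl_0_0 into the cvl_0_0_ns_spdk netns as the target side (10.0.0.2/24) and leaves cvl_0_1 in the root namespace as the initiator (10.0.0.1/24); the two pings then confirm reachability in both directions. A small stdlib check that the two addresses assigned here do share the /24 the pings depend on (interface and netns names are from this log; the check itself is illustrative):

```python
import ipaddress

# Addresses assigned by nvmf_tcp_init above (nvmf/common.sh@254-255).
initiator = ipaddress.ip_interface("10.0.0.1/24")  # cvl_0_1, root netns
target = ipaddress.ip_interface("10.0.0.2/24")     # cvl_0_0, cvl_0_0_ns_spdk netns

# Both ends must sit in the same subnet for the pings above to succeed.
assert initiator.network == target.network
print(target.ip, "reachable from", initiator.ip, "on", initiator.network)
```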
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.460 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.461 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2162482 00:08:02.461 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:02.461 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2162482 00:08:02.461 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2162482 ']' 00:08:02.461 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.461 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.461 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:02.461 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.461 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.461 [2024-07-26 01:43:44.245972] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:08:02.461 [2024-07-26 01:43:44.246069] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.461 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.461 [2024-07-26 01:43:44.317184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.461 [2024-07-26 01:43:44.407124] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.461 [2024-07-26 01:43:44.407188] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.461 [2024-07-26 01:43:44.407214] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.461 [2024-07-26 01:43:44.407228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.461 [2024-07-26 01:43:44.407240] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:02.461 [2024-07-26 01:43:44.407324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.461 [2024-07-26 01:43:44.407395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.461 [2024-07-26 01:43:44.407398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.718 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.718 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:02.718 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.718 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.718 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.718 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.718 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:02.975 [2024-07-26 01:43:44.788144] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.975 01:43:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:03.234 01:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:03.234 01:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:03.492 01:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:03.492 01:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:03.750 01:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:04.008 01:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=129915b6-006a-4d38-aa87-4fa2059cbd78 00:08:04.008 01:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 129915b6-006a-4d38-aa87-4fa2059cbd78 lvol 20 00:08:04.266 01:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=911faf70-2320-4886-b495-59b49dcb57e5 00:08:04.266 01:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:04.523 01:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 911faf70-2320-4886-b495-59b49dcb57e5 00:08:04.780 01:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:05.038 [2024-07-26 01:43:46.829279] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.038 01:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.296 01:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2162903 00:08:05.296 01:43:47 
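Strung together, the provisioning steps interleaved above amount to the rpc.py sequence below. This is a reconstruction from the log, not a script SPDK ships, and the UUIDs are the ones this particular run produced:

```shell
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Two 64 MiB malloc bdevs (512-byte blocks) striped into a raid0 (64 KiB strip)
$RPC bdev_malloc_create 64 512                   # -> Malloc0
$RPC bdev_malloc_create 64 512                   # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

# Logical volume store on the raid, then a 20 MiB lvol inside it
$RPC bdev_lvol_create_lvstore raid0 lvs          # -> 129915b6-006a-...
$RPC bdev_lvol_create -u 129915b6-006a-4d38-aa87-4fa2059cbd78 lvol 20

# Expose the lvol over NVMe/TCP on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 911faf70-2320-4886-b495-59b49dcb57e5
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The snapshot/resize/clone/inflate calls that follow in the log run against the same lvol while spdk_nvme_perf drives I/O at it, which is the point of the test.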
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:05.296 01:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:05.296 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.234 01:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 911faf70-2320-4886-b495-59b49dcb57e5 MY_SNAPSHOT 00:08:06.492 01:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1b1e8c71-3023-4e7f-ad12-348bb9cd29ba 00:08:06.492 01:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 911faf70-2320-4886-b495-59b49dcb57e5 30 00:08:06.750 01:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1b1e8c71-3023-4e7f-ad12-348bb9cd29ba MY_CLONE 00:08:07.007 01:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4dab3d64-4d82-47f2-9ddc-c89b8ac9f310 00:08:07.007 01:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4dab3d64-4d82-47f2-9ddc-c89b8ac9f310 00:08:07.945 01:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2162903 00:08:16.083 Initializing NVMe Controllers 00:08:16.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:16.083 Controller IO queue size 128, less than required. 00:08:16.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:16.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:16.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:16.084 Initialization complete. Launching workers. 00:08:16.084 ======================================================== 00:08:16.084 Latency(us) 00:08:16.084 Device Information : IOPS MiB/s Average min max 00:08:16.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10602.10 41.41 12082.38 1158.47 93390.76 00:08:16.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10587.00 41.36 12093.59 2001.85 69046.21 00:08:16.084 ======================================================== 00:08:16.084 Total : 21189.10 82.77 12087.98 1158.47 93390.76 00:08:16.084 00:08:16.084 01:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:16.084 01:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 911faf70-2320-4886-b495-59b49dcb57e5 00:08:16.084 01:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 129915b6-006a-4d38-aa87-4fa2059cbd78 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol 
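The Total row of the perf table above can be cross-checked: IOPS and MiB/s are sums over the two cores, and the average latency is the IOPS-weighted mean of the per-core averages. A quick recomputation from the per-core figures printed in the log (plain stdlib, illustrative only):

```python
# Per-core results from the spdk_nvme_perf table above:
# (IOPS, MiB/s, average latency in microseconds)
core3 = (10602.10, 41.41, 12082.38)
core4 = (10587.00, 41.36, 12093.59)

total_iops = core3[0] + core4[0]
total_mibs = core3[1] + core4[1]
# The Total row's average latency is weighted by per-core IOPS.
weighted_avg_us = (core3[0] * core3[2] + core4[0] * core4[2]) / total_iops

# Matches the Total row above: 21189.10 IOPS, 82.77 MiB/s, ~12087.98 us
print(round(total_iops, 2), round(total_mibs, 2), round(weighted_avg_us, 2))
```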
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.343 rmmod nvme_tcp 00:08:16.343 rmmod nvme_fabrics 00:08:16.343 rmmod nvme_keyring 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2162482 ']' 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2162482 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2162482 ']' 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2162482 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2162482 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2162482' 00:08:16.343 killing process with pid 2162482 00:08:16.343 01:43:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2162482 00:08:16.343 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2162482 00:08:16.601 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.601 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.601 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.601 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.601 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.602 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.602 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.602 01:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.142 00:08:19.142 real 0m18.574s 00:08:19.142 user 1m4.030s 00:08:19.142 sys 0m5.320s 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.142 ************************************ 00:08:19.142 END TEST nvmf_lvol 00:08:19.142 ************************************ 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.142 01:44:00 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.142 ************************************ 00:08:19.142 START TEST nvmf_lvs_grow 00:08:19.142 ************************************ 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:19.142 * Looking for test storage... 00:08:19.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.142 01:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.050 01:44:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.050 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:21.051 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.051 
01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:21.051 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.051 01:44:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:21.051 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:21.051 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.051 01:44:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:08:21.051 00:08:21.051 --- 10.0.0.2 ping statistics --- 00:08:21.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.051 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:08:21.051 00:08:21.051 --- 10.0.0.1 ping statistics --- 00:08:21.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.051 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.051 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2166184 00:08:21.052 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:21.052 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2166184 00:08:21.052 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2166184 ']' 00:08:21.052 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.052 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.052 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.052 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.052 01:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.052 [2024-07-26 01:44:02.992331] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:08:21.052 [2024-07-26 01:44:02.992431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.052 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.314 [2024-07-26 01:44:03.064680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.314 [2024-07-26 01:44:03.159843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.314 [2024-07-26 01:44:03.159925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:21.314 [2024-07-26 01:44:03.159941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.314 [2024-07-26 01:44:03.159954] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.314 [2024-07-26 01:44:03.159965] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.314 [2024-07-26 01:44:03.160004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.314 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.314 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:21.314 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.314 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:21.314 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.314 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.314 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:21.571 [2024-07-26 01:44:03.535752] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.571 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:21.571 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.571 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.571 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.829 ************************************ 00:08:21.829 START TEST lvs_grow_clean 00:08:21.829 ************************************ 00:08:21.829 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:21.829 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:21.829 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:21.829 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:21.829 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:21.829 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:21.829 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:21.829 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.829 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.829 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.089 01:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:22.089 01:44:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:22.348 01:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=259934f4-550d-41f8-be53-4f362b150c59 00:08:22.348 01:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259934f4-550d-41f8-be53-4f362b150c59 00:08:22.348 01:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:22.348 01:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:22.349 01:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:22.607 01:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 259934f4-550d-41f8-be53-4f362b150c59 lvol 150 00:08:22.865 01:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ed3fdb4a-c743-45f2-96d0-7908287fb1eb 00:08:22.865 01:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:22.865 01:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:23.123 [2024-07-26 01:44:04.877330] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:23.123 [2024-07-26 01:44:04.877445] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:23.123 true 00:08:23.123 01:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259934f4-550d-41f8-be53-4f362b150c59 00:08:23.123 01:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:23.383 01:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:23.383 01:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:23.383 01:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ed3fdb4a-c743-45f2-96d0-7908287fb1eb 00:08:23.951 01:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:23.951 [2024-07-26 01:44:05.908507] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.951 01:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.210 01:44:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2166623 00:08:24.210 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:24.210 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:24.210 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2166623 /var/tmp/bdevperf.sock 00:08:24.210 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2166623 ']' 00:08:24.210 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.210 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.210 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:24.210 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.210 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:24.470 [2024-07-26 01:44:06.254996] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:08:24.470 [2024-07-26 01:44:06.255100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166623 ] 00:08:24.470 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.470 [2024-07-26 01:44:06.323239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.470 [2024-07-26 01:44:06.413204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.729 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.729 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:24.729 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:24.987 Nvme0n1 00:08:24.987 01:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:25.246 [ 00:08:25.246 { 00:08:25.246 "name": "Nvme0n1", 00:08:25.246 "aliases": [ 00:08:25.246 "ed3fdb4a-c743-45f2-96d0-7908287fb1eb" 00:08:25.246 ], 00:08:25.246 "product_name": "NVMe disk", 00:08:25.246 "block_size": 4096, 00:08:25.246 "num_blocks": 38912, 00:08:25.246 "uuid": "ed3fdb4a-c743-45f2-96d0-7908287fb1eb", 00:08:25.246 "assigned_rate_limits": { 00:08:25.246 "rw_ios_per_sec": 0, 00:08:25.246 "rw_mbytes_per_sec": 0, 00:08:25.246 "r_mbytes_per_sec": 0, 00:08:25.246 "w_mbytes_per_sec": 0 00:08:25.246 }, 00:08:25.246 "claimed": false, 00:08:25.246 "zoned": false, 00:08:25.246 
"supported_io_types": { 00:08:25.246 "read": true, 00:08:25.246 "write": true, 00:08:25.246 "unmap": true, 00:08:25.246 "flush": true, 00:08:25.246 "reset": true, 00:08:25.246 "nvme_admin": true, 00:08:25.246 "nvme_io": true, 00:08:25.246 "nvme_io_md": false, 00:08:25.246 "write_zeroes": true, 00:08:25.246 "zcopy": false, 00:08:25.246 "get_zone_info": false, 00:08:25.246 "zone_management": false, 00:08:25.246 "zone_append": false, 00:08:25.246 "compare": true, 00:08:25.246 "compare_and_write": true, 00:08:25.246 "abort": true, 00:08:25.246 "seek_hole": false, 00:08:25.246 "seek_data": false, 00:08:25.246 "copy": true, 00:08:25.246 "nvme_iov_md": false 00:08:25.246 }, 00:08:25.246 "memory_domains": [ 00:08:25.246 { 00:08:25.246 "dma_device_id": "system", 00:08:25.246 "dma_device_type": 1 00:08:25.246 } 00:08:25.246 ], 00:08:25.246 "driver_specific": { 00:08:25.246 "nvme": [ 00:08:25.246 { 00:08:25.246 "trid": { 00:08:25.246 "trtype": "TCP", 00:08:25.246 "adrfam": "IPv4", 00:08:25.246 "traddr": "10.0.0.2", 00:08:25.246 "trsvcid": "4420", 00:08:25.246 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:25.246 }, 00:08:25.246 "ctrlr_data": { 00:08:25.246 "cntlid": 1, 00:08:25.246 "vendor_id": "0x8086", 00:08:25.246 "model_number": "SPDK bdev Controller", 00:08:25.246 "serial_number": "SPDK0", 00:08:25.246 "firmware_revision": "24.09", 00:08:25.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:25.246 "oacs": { 00:08:25.246 "security": 0, 00:08:25.246 "format": 0, 00:08:25.246 "firmware": 0, 00:08:25.246 "ns_manage": 0 00:08:25.246 }, 00:08:25.246 "multi_ctrlr": true, 00:08:25.246 "ana_reporting": false 00:08:25.246 }, 00:08:25.246 "vs": { 00:08:25.246 "nvme_version": "1.3" 00:08:25.246 }, 00:08:25.246 "ns_data": { 00:08:25.246 "id": 1, 00:08:25.246 "can_share": true 00:08:25.246 } 00:08:25.246 } 00:08:25.246 ], 00:08:25.246 "mp_policy": "active_passive" 00:08:25.246 } 00:08:25.246 } 00:08:25.246 ] 00:08:25.246 01:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2166723 00:08:25.246 01:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:25.246 01:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:25.506 Running I/O for 10 seconds... 00:08:26.446 Latency(us) 00:08:26.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.446 Nvme0n1 : 1.00 14554.00 56.85 0.00 0.00 0.00 0.00 0.00 00:08:26.446 =================================================================================================================== 00:08:26.446 Total : 14554.00 56.85 0.00 0.00 0.00 0.00 0.00 00:08:26.446 00:08:27.385 01:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 259934f4-550d-41f8-be53-4f362b150c59 00:08:27.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.385 Nvme0n1 : 2.00 14678.50 57.34 0.00 0.00 0.00 0.00 0.00 00:08:27.385 =================================================================================================================== 00:08:27.385 Total : 14678.50 57.34 0.00 0.00 0.00 0.00 0.00 00:08:27.385 00:08:27.644 true 00:08:27.644 01:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259934f4-550d-41f8-be53-4f362b150c59 00:08:27.644 01:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:27.903 01:44:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:27.903 01:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:27.903 01:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2166723 00:08:28.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.474 Nvme0n1 : 3.00 14788.33 57.77 0.00 0.00 0.00 0.00 0.00 00:08:28.474 =================================================================================================================== 00:08:28.474 Total : 14788.33 57.77 0.00 0.00 0.00 0.00 0.00 00:08:28.474 00:08:29.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.422 Nvme0n1 : 4.00 14870.25 58.09 0.00 0.00 0.00 0.00 0.00 00:08:29.422 =================================================================================================================== 00:08:29.422 Total : 14870.25 58.09 0.00 0.00 0.00 0.00 0.00 00:08:29.422 00:08:30.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.800 Nvme0n1 : 5.00 14957.20 58.43 0.00 0.00 0.00 0.00 0.00 00:08:30.800 =================================================================================================================== 00:08:30.800 Total : 14957.20 58.43 0.00 0.00 0.00 0.00 0.00 00:08:30.800 00:08:31.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.396 Nvme0n1 : 6.00 14983.17 58.53 0.00 0.00 0.00 0.00 0.00 00:08:31.396 =================================================================================================================== 00:08:31.396 Total : 14983.17 58.53 0.00 0.00 0.00 0.00 0.00 00:08:31.396 00:08:32.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.773 Nvme0n1 : 7.00 15020.57 58.67 0.00 0.00 0.00 0.00 0.00 00:08:32.773 
=================================================================================================================== 00:08:32.773 Total : 15020.57 58.67 0.00 0.00 0.00 0.00 0.00 00:08:32.773 00:08:33.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.712 Nvme0n1 : 8.00 15068.38 58.86 0.00 0.00 0.00 0.00 0.00 00:08:33.712 =================================================================================================================== 00:08:33.712 Total : 15068.38 58.86 0.00 0.00 0.00 0.00 0.00 00:08:33.712 00:08:34.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.649 Nvme0n1 : 9.00 15102.56 58.99 0.00 0.00 0.00 0.00 0.00 00:08:34.649 =================================================================================================================== 00:08:34.649 Total : 15102.56 58.99 0.00 0.00 0.00 0.00 0.00 00:08:34.649 00:08:35.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.585 Nvme0n1 : 10.00 15136.70 59.13 0.00 0.00 0.00 0.00 0.00 00:08:35.585 =================================================================================================================== 00:08:35.585 Total : 15136.70 59.13 0.00 0.00 0.00 0.00 0.00 00:08:35.585 00:08:35.585 00:08:35.585 Latency(us) 00:08:35.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.585 Nvme0n1 : 10.01 15136.85 59.13 0.00 0.00 8451.01 5000.15 15534.46 00:08:35.585 =================================================================================================================== 00:08:35.585 Total : 15136.85 59.13 0.00 0.00 8451.01 5000.15 15534.46 00:08:35.585 0 00:08:35.585 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2166623 00:08:35.585 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 2166623 ']' 00:08:35.585 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2166623 00:08:35.585 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:35.585 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.585 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2166623 00:08:35.585 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:35.585 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:35.585 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2166623' 00:08:35.585 killing process with pid 2166623 00:08:35.585 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2166623 00:08:35.585 Received shutdown signal, test time was about 10.000000 seconds 00:08:35.585 00:08:35.585 Latency(us) 00:08:35.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.585 =================================================================================================================== 00:08:35.585 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:35.585 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2166623 00:08:35.843 01:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:36.100 01:44:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:36.358 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259934f4-550d-41f8-be53-4f362b150c59 00:08:36.358 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:36.615 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:36.615 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:36.616 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:36.875 [2024-07-26 01:44:18.642881] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259934f4-550d-41f8-be53-4f362b150c59 00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259934f4-550d-41f8-be53-4f362b150c59 00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:08:36.875 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259934f4-550d-41f8-be53-4f362b150c59
00:08:37.136 request:
00:08:37.136 {
00:08:37.136 "uuid": "259934f4-550d-41f8-be53-4f362b150c59",
00:08:37.136 "method": "bdev_lvol_get_lvstores",
00:08:37.136 "req_id": 1
00:08:37.136 }
00:08:37.136 Got JSON-RPC error response
00:08:37.136 response:
00:08:37.136 {
00:08:37.136 "code": -19,
00:08:37.136 "message": "No such device"
00:08:37.136 }
00:08:37.136 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1
00:08:37.136 01:44:18
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:37.136 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:37.136 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:37.136 01:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.395 aio_bdev 00:08:37.395 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ed3fdb4a-c743-45f2-96d0-7908287fb1eb 00:08:37.395 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ed3fdb4a-c743-45f2-96d0-7908287fb1eb 00:08:37.395 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.395 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:37.395 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.395 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.395 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:37.652 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ed3fdb4a-c743-45f2-96d0-7908287fb1eb -t 2000 00:08:37.912 [ 00:08:37.912 { 
00:08:37.912 "name": "ed3fdb4a-c743-45f2-96d0-7908287fb1eb",
00:08:37.912 "aliases": [
00:08:37.912 "lvs/lvol"
00:08:37.912 ],
00:08:37.912 "product_name": "Logical Volume",
00:08:37.912 "block_size": 4096,
00:08:37.912 "num_blocks": 38912,
00:08:37.912 "uuid": "ed3fdb4a-c743-45f2-96d0-7908287fb1eb",
00:08:37.912 "assigned_rate_limits": {
00:08:37.912 "rw_ios_per_sec": 0,
00:08:37.912 "rw_mbytes_per_sec": 0,
00:08:37.912 "r_mbytes_per_sec": 0,
00:08:37.912 "w_mbytes_per_sec": 0
00:08:37.912 },
00:08:37.912 "claimed": false,
00:08:37.912 "zoned": false,
00:08:37.912 "supported_io_types": {
00:08:37.912 "read": true,
00:08:37.912 "write": true,
00:08:37.912 "unmap": true,
00:08:37.912 "flush": false,
00:08:37.912 "reset": true,
00:08:37.912 "nvme_admin": false,
00:08:37.912 "nvme_io": false,
00:08:37.912 "nvme_io_md": false,
00:08:37.912 "write_zeroes": true,
00:08:37.912 "zcopy": false,
00:08:37.912 "get_zone_info": false,
00:08:37.912 "zone_management": false,
00:08:37.912 "zone_append": false,
00:08:37.912 "compare": false,
00:08:37.912 "compare_and_write": false,
00:08:37.912 "abort": false,
00:08:37.912 "seek_hole": true,
00:08:37.912 "seek_data": true,
00:08:37.912 "copy": false,
00:08:37.912 "nvme_iov_md": false
00:08:37.912 },
00:08:37.912 "driver_specific": {
00:08:37.912 "lvol": {
00:08:37.912 "lvol_store_uuid": "259934f4-550d-41f8-be53-4f362b150c59",
00:08:37.912 "base_bdev": "aio_bdev",
00:08:37.912 "thin_provision": false,
00:08:37.912 "num_allocated_clusters": 38,
00:08:37.912 "snapshot": false,
00:08:37.912 "clone": false,
00:08:37.912 "esnap_clone": false
00:08:37.912 }
00:08:37.912 }
00:08:37.912 }
00:08:37.912 ]
00:08:37.912 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0
00:08:37.912 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u
259934f4-550d-41f8-be53-4f362b150c59 00:08:37.912 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:38.170 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:38.170 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:38.170 01:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259934f4-550d-41f8-be53-4f362b150c59 00:08:38.430 01:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:38.430 01:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ed3fdb4a-c743-45f2-96d0-7908287fb1eb 00:08:38.692 01:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 259934f4-550d-41f8-be53-4f362b150c59 00:08:38.950 01:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.208 00:08:39.208 real 0m17.459s 00:08:39.208 user 0m15.879s 00:08:39.208 sys 0m2.328s 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.208 01:44:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:39.208 ************************************ 00:08:39.208 END TEST lvs_grow_clean 00:08:39.208 ************************************ 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.208 ************************************ 00:08:39.208 START TEST lvs_grow_dirty 00:08:39.208 ************************************ 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.208 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.466 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:39.466 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:39.726 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:39.726 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:39.726 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:39.986 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:39.986 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:39.986 01:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
6618168f-d20a-4a55-bd4a-89cdd0cd30db lvol 150 00:08:40.245 01:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=74457fe2-6abb-44ec-b7b6-e0693750bac8 00:08:40.245 01:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.245 01:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:40.506 [2024-07-26 01:44:22.375371] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:40.506 [2024-07-26 01:44:22.375471] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:40.506 true 00:08:40.506 01:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:40.506 01:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:40.767 01:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:40.767 01:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.026 01:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
74457fe2-6abb-44ec-b7b6-e0693750bac8 00:08:41.286 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:41.545 [2024-07-26 01:44:23.370411] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.545 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.803 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2168684 00:08:41.803 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:41.803 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:41.803 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2168684 /var/tmp/bdevperf.sock 00:08:41.803 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2168684 ']' 00:08:41.803 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:41.803 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.803 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:41.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:41.803 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.803 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:41.803 [2024-07-26 01:44:23.666488] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:08:41.803 [2024-07-26 01:44:23.666559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168684 ] 00:08:41.803 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.803 [2024-07-26 01:44:23.728089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.060 [2024-07-26 01:44:23.818592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.060 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.060 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:42.060 01:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:42.317 Nvme0n1 00:08:42.317 01:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:42.576 [ 00:08:42.576 { 00:08:42.576 "name": "Nvme0n1", 00:08:42.576 "aliases": [ 
00:08:42.576 "74457fe2-6abb-44ec-b7b6-e0693750bac8"
00:08:42.576 ],
00:08:42.576 "product_name": "NVMe disk",
00:08:42.576 "block_size": 4096,
00:08:42.576 "num_blocks": 38912,
00:08:42.576 "uuid": "74457fe2-6abb-44ec-b7b6-e0693750bac8",
00:08:42.576 "assigned_rate_limits": {
00:08:42.576 "rw_ios_per_sec": 0,
00:08:42.576 "rw_mbytes_per_sec": 0,
00:08:42.576 "r_mbytes_per_sec": 0,
00:08:42.576 "w_mbytes_per_sec": 0
00:08:42.576 },
00:08:42.576 "claimed": false,
00:08:42.576 "zoned": false,
00:08:42.576 "supported_io_types": {
00:08:42.576 "read": true,
00:08:42.576 "write": true,
00:08:42.576 "unmap": true,
00:08:42.576 "flush": true,
00:08:42.576 "reset": true,
00:08:42.576 "nvme_admin": true,
00:08:42.576 "nvme_io": true,
00:08:42.576 "nvme_io_md": false,
00:08:42.576 "write_zeroes": true,
00:08:42.576 "zcopy": false,
00:08:42.576 "get_zone_info": false,
00:08:42.576 "zone_management": false,
00:08:42.576 "zone_append": false,
00:08:42.576 "compare": true,
00:08:42.576 "compare_and_write": true,
00:08:42.576 "abort": true,
00:08:42.576 "seek_hole": false,
00:08:42.576 "seek_data": false,
00:08:42.576 "copy": true,
00:08:42.576 "nvme_iov_md": false
00:08:42.576 },
00:08:42.576 "memory_domains": [
00:08:42.576 {
00:08:42.576 "dma_device_id": "system",
00:08:42.576 "dma_device_type": 1
00:08:42.576 }
00:08:42.576 ],
00:08:42.576 "driver_specific": {
00:08:42.576 "nvme": [
00:08:42.576 {
00:08:42.576 "trid": {
00:08:42.576 "trtype": "TCP",
00:08:42.576 "adrfam": "IPv4",
00:08:42.576 "traddr": "10.0.0.2",
00:08:42.576 "trsvcid": "4420",
00:08:42.576 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:08:42.576 },
00:08:42.576 "ctrlr_data": {
00:08:42.576 "cntlid": 1,
00:08:42.576 "vendor_id": "0x8086",
00:08:42.576 "model_number": "SPDK bdev Controller",
00:08:42.576 "serial_number": "SPDK0",
00:08:42.576 "firmware_revision": "24.09",
00:08:42.576 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:42.576 "oacs": {
00:08:42.576 "security": 0,
00:08:42.576 "format": 0,
00:08:42.576
"firmware": 0, 00:08:42.576 "ns_manage": 0 00:08:42.576 }, 00:08:42.576 "multi_ctrlr": true, 00:08:42.576 "ana_reporting": false 00:08:42.576 }, 00:08:42.576 "vs": { 00:08:42.576 "nvme_version": "1.3" 00:08:42.576 }, 00:08:42.576 "ns_data": { 00:08:42.576 "id": 1, 00:08:42.576 "can_share": true 00:08:42.576 } 00:08:42.576 } 00:08:42.576 ], 00:08:42.576 "mp_policy": "active_passive" 00:08:42.576 } 00:08:42.576 } 00:08:42.576 ] 00:08:42.576 01:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2168822 00:08:42.576 01:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:42.576 01:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:42.836 Running I/O for 10 seconds... 00:08:43.777 Latency(us) 00:08:43.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.777 Nvme0n1 : 1.00 14293.00 55.83 0.00 0.00 0.00 0.00 0.00 00:08:43.777 =================================================================================================================== 00:08:43.777 Total : 14293.00 55.83 0.00 0.00 0.00 0.00 0.00 00:08:43.777 00:08:44.716 01:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:44.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.716 Nvme0n1 : 2.00 14481.50 56.57 0.00 0.00 0.00 0.00 0.00 00:08:44.716 =================================================================================================================== 00:08:44.716 Total : 14481.50 56.57 
0.00 0.00 0.00 0.00 0.00 00:08:44.716 00:08:44.976 true 00:08:44.976 01:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:44.976 01:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:45.236 01:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:45.236 01:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:45.236 01:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2168822 00:08:45.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.804 Nvme0n1 : 3.00 14672.33 57.31 0.00 0.00 0.00 0.00 0.00 00:08:45.804 =================================================================================================================== 00:08:45.804 Total : 14672.33 57.31 0.00 0.00 0.00 0.00 0.00 00:08:45.804 00:08:46.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.745 Nvme0n1 : 4.00 14786.75 57.76 0.00 0.00 0.00 0.00 0.00 00:08:46.745 =================================================================================================================== 00:08:46.745 Total : 14786.75 57.76 0.00 0.00 0.00 0.00 0.00 00:08:46.745 00:08:47.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.685 Nvme0n1 : 5.00 14839.60 57.97 0.00 0.00 0.00 0.00 0.00 00:08:47.685 =================================================================================================================== 00:08:47.685 Total : 14839.60 57.97 0.00 0.00 0.00 0.00 0.00 00:08:47.685 00:08:48.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:48.624 Nvme0n1 : 6.00 14864.00 58.06 0.00 0.00 0.00 0.00 0.00 00:08:48.624 =================================================================================================================== 00:08:48.624 Total : 14864.00 58.06 0.00 0.00 0.00 0.00 0.00 00:08:48.624 00:08:50.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.019 Nvme0n1 : 7.00 14900.00 58.20 0.00 0.00 0.00 0.00 0.00 00:08:50.019 =================================================================================================================== 00:08:50.019 Total : 14900.00 58.20 0.00 0.00 0.00 0.00 0.00 00:08:50.019 00:08:50.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.636 Nvme0n1 : 8.00 14958.38 58.43 0.00 0.00 0.00 0.00 0.00 00:08:50.636 =================================================================================================================== 00:08:50.636 Total : 14958.38 58.43 0.00 0.00 0.00 0.00 0.00 00:08:50.636 00:08:52.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.017 Nvme0n1 : 9.00 15011.00 58.64 0.00 0.00 0.00 0.00 0.00 00:08:52.017 =================================================================================================================== 00:08:52.017 Total : 15011.00 58.64 0.00 0.00 0.00 0.00 0.00 00:08:52.017 00:08:52.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.954 Nvme0n1 : 10.00 15060.50 58.83 0.00 0.00 0.00 0.00 0.00 00:08:52.954 =================================================================================================================== 00:08:52.954 Total : 15060.50 58.83 0.00 0.00 0.00 0.00 0.00 00:08:52.954 00:08:52.954 00:08:52.954 Latency(us) 00:08:52.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.954 Nvme0n1 : 10.01 15059.38 58.83 0.00 0.00 8494.48 
3568.07 16505.36 00:08:52.954 =================================================================================================================== 00:08:52.954 Total : 15059.38 58.83 0.00 0.00 8494.48 3568.07 16505.36 00:08:52.954 0 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2168684 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2168684 ']' 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2168684 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2168684 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2168684' 00:08:52.954 killing process with pid 2168684 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2168684 00:08:52.954 Received shutdown signal, test time was about 10.000000 seconds 00:08:52.954 00:08:52.954 Latency(us) 00:08:52.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.954 =================================================================================================================== 00:08:52.954 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2168684 00:08:52.954 01:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.212 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:53.470 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:53.470 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2166184 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2166184 00:08:53.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2166184 Killed "${NVMF_APP[@]}" "$@" 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2170158 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2170158 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2170158 ']' 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.729 01:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:53.989 [2024-07-26 01:44:35.757122] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:08:53.989 [2024-07-26 01:44:35.757198] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.989 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.989 [2024-07-26 01:44:35.832866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.989 [2024-07-26 01:44:35.927692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.989 [2024-07-26 01:44:35.927760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.989 [2024-07-26 01:44:35.927776] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.989 [2024-07-26 01:44:35.927790] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.989 [2024-07-26 01:44:35.927802] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:53.989 [2024-07-26 01:44:35.927834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.247 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.247 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:54.247 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.247 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.247 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.247 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.247 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.505 [2024-07-26 01:44:36.290280] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:54.505 [2024-07-26 01:44:36.290438] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:54.505 [2024-07-26 01:44:36.290497] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:54.505 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:54.505 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 74457fe2-6abb-44ec-b7b6-e0693750bac8 00:08:54.505 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=74457fe2-6abb-44ec-b7b6-e0693750bac8 
00:08:54.505 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.505 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:54.505 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.505 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.505 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:54.764 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 74457fe2-6abb-44ec-b7b6-e0693750bac8 -t 2000 00:08:55.024 [ 00:08:55.024 { 00:08:55.024 "name": "74457fe2-6abb-44ec-b7b6-e0693750bac8", 00:08:55.024 "aliases": [ 00:08:55.024 "lvs/lvol" 00:08:55.024 ], 00:08:55.024 "product_name": "Logical Volume", 00:08:55.025 "block_size": 4096, 00:08:55.025 "num_blocks": 38912, 00:08:55.025 "uuid": "74457fe2-6abb-44ec-b7b6-e0693750bac8", 00:08:55.025 "assigned_rate_limits": { 00:08:55.025 "rw_ios_per_sec": 0, 00:08:55.025 "rw_mbytes_per_sec": 0, 00:08:55.025 "r_mbytes_per_sec": 0, 00:08:55.025 "w_mbytes_per_sec": 0 00:08:55.025 }, 00:08:55.025 "claimed": false, 00:08:55.025 "zoned": false, 00:08:55.025 "supported_io_types": { 00:08:55.025 "read": true, 00:08:55.025 "write": true, 00:08:55.025 "unmap": true, 00:08:55.025 "flush": false, 00:08:55.025 "reset": true, 00:08:55.025 "nvme_admin": false, 00:08:55.025 "nvme_io": false, 00:08:55.025 "nvme_io_md": false, 00:08:55.025 "write_zeroes": true, 00:08:55.025 "zcopy": false, 00:08:55.025 "get_zone_info": false, 00:08:55.025 "zone_management": false, 00:08:55.025 "zone_append": 
false, 00:08:55.025 "compare": false, 00:08:55.025 "compare_and_write": false, 00:08:55.025 "abort": false, 00:08:55.025 "seek_hole": true, 00:08:55.025 "seek_data": true, 00:08:55.025 "copy": false, 00:08:55.025 "nvme_iov_md": false 00:08:55.025 }, 00:08:55.025 "driver_specific": { 00:08:55.025 "lvol": { 00:08:55.025 "lvol_store_uuid": "6618168f-d20a-4a55-bd4a-89cdd0cd30db", 00:08:55.025 "base_bdev": "aio_bdev", 00:08:55.025 "thin_provision": false, 00:08:55.025 "num_allocated_clusters": 38, 00:08:55.025 "snapshot": false, 00:08:55.025 "clone": false, 00:08:55.025 "esnap_clone": false 00:08:55.025 } 00:08:55.025 } 00:08:55.025 } 00:08:55.025 ] 00:08:55.025 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:55.025 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:55.025 01:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:55.284 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:55.284 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:55.284 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:55.544 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:55.544 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:55.803 [2024-07-26 01:44:37.615274] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:55.803 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:55.803 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:55.803 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:55.803 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.803 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.803 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.803 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.803 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.803 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.803 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.803 01:44:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:55.803 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:56.061 request: 00:08:56.061 { 00:08:56.061 "uuid": "6618168f-d20a-4a55-bd4a-89cdd0cd30db", 00:08:56.061 "method": "bdev_lvol_get_lvstores", 00:08:56.061 "req_id": 1 00:08:56.061 } 00:08:56.061 Got JSON-RPC error response 00:08:56.061 response: 00:08:56.061 { 00:08:56.061 "code": -19, 00:08:56.061 "message": "No such device" 00:08:56.061 } 00:08:56.061 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:56.061 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:56.061 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:56.061 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:56.061 01:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:56.319 aio_bdev 00:08:56.319 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 74457fe2-6abb-44ec-b7b6-e0693750bac8 00:08:56.319 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=74457fe2-6abb-44ec-b7b6-e0693750bac8 00:08:56.319 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.319 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:56.319 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.319 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.319 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:56.576 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 74457fe2-6abb-44ec-b7b6-e0693750bac8 -t 2000 00:08:56.833 [ 00:08:56.833 { 00:08:56.833 "name": "74457fe2-6abb-44ec-b7b6-e0693750bac8", 00:08:56.833 "aliases": [ 00:08:56.833 "lvs/lvol" 00:08:56.833 ], 00:08:56.833 "product_name": "Logical Volume", 00:08:56.833 "block_size": 4096, 00:08:56.834 "num_blocks": 38912, 00:08:56.834 "uuid": "74457fe2-6abb-44ec-b7b6-e0693750bac8", 00:08:56.834 "assigned_rate_limits": { 00:08:56.834 "rw_ios_per_sec": 0, 00:08:56.834 "rw_mbytes_per_sec": 0, 00:08:56.834 "r_mbytes_per_sec": 0, 00:08:56.834 "w_mbytes_per_sec": 0 00:08:56.834 }, 00:08:56.834 "claimed": false, 00:08:56.834 "zoned": false, 00:08:56.834 "supported_io_types": { 00:08:56.834 "read": true, 00:08:56.834 "write": true, 00:08:56.834 "unmap": true, 00:08:56.834 "flush": false, 00:08:56.834 "reset": true, 00:08:56.834 "nvme_admin": false, 00:08:56.834 "nvme_io": false, 00:08:56.834 "nvme_io_md": false, 00:08:56.834 "write_zeroes": true, 00:08:56.834 "zcopy": false, 00:08:56.834 "get_zone_info": false, 00:08:56.834 "zone_management": false, 00:08:56.834 "zone_append": false, 00:08:56.834 "compare": false, 00:08:56.834 "compare_and_write": false, 
00:08:56.834 "abort": false, 00:08:56.834 "seek_hole": true, 00:08:56.834 "seek_data": true, 00:08:56.834 "copy": false, 00:08:56.834 "nvme_iov_md": false 00:08:56.834 }, 00:08:56.834 "driver_specific": { 00:08:56.834 "lvol": { 00:08:56.834 "lvol_store_uuid": "6618168f-d20a-4a55-bd4a-89cdd0cd30db", 00:08:56.834 "base_bdev": "aio_bdev", 00:08:56.834 "thin_provision": false, 00:08:56.834 "num_allocated_clusters": 38, 00:08:56.834 "snapshot": false, 00:08:56.834 "clone": false, 00:08:56.834 "esnap_clone": false 00:08:56.834 } 00:08:56.834 } 00:08:56.834 } 00:08:56.834 ] 00:08:56.834 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:56.834 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:56.834 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:57.091 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:57.091 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:57.091 01:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:57.350 01:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:57.350 01:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 74457fe2-6abb-44ec-b7b6-e0693750bac8 00:08:57.610 01:44:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6618168f-d20a-4a55-bd4a-89cdd0cd30db 00:08:57.868 01:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:58.126 01:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.126 00:08:58.126 real 0m18.914s 00:08:58.126 user 0m47.741s 00:08:58.126 sys 0m4.733s 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.126 ************************************ 00:08:58.126 END TEST lvs_grow_dirty 00:08:58.126 ************************************ 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:58.126 nvmf_trace.0 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.126 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.126 rmmod nvme_tcp 00:08:58.126 rmmod nvme_fabrics 00:08:58.126 rmmod nvme_keyring 00:08:58.384 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2170158 ']' 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2170158 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2170158 ']' 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2170158 
00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2170158 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2170158' 00:08:58.385 killing process with pid 2170158 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2170158 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2170158 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.385 01:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.920 01:44:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.920 00:09:00.920 real 0m41.730s 00:09:00.920 user 1m9.370s 00:09:00.920 sys 0m8.985s 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.920 ************************************ 00:09:00.920 END TEST nvmf_lvs_grow 00:09:00.920 ************************************ 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.920 ************************************ 00:09:00.920 START TEST nvmf_bdev_io_wait 00:09:00.920 ************************************ 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:00.920 * Looking for test storage... 
00:09:00.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:00.920 01:44:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.920 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.921 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.921 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.921 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.921 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.921 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.921 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.921 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.921 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.921 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.921 01:44:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.823 01:44:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.823 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:02.824 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:02.824 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.824 01:44:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:02.824 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:02.824 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:02.824 01:44:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:09:02.824 00:09:02.824 --- 10.0.0.2 ping statistics --- 00:09:02.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.824 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:09:02.824 00:09:02.824 --- 10.0.0.1 ping statistics --- 00:09:02.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.824 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2172675 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
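De-noised, the nvmf_tcp_init sequence traced above reduces to the following steps (collected from the trace; cvl_0_0/cvl_0_1 are the ice netdevs found earlier on this rig, and every command needs root):

```shell
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check
```

Both pings succeeding is what lets the helper return 0 and proceed to nvmfappstart.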
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2172675 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2172675 ']' 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.824 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:02.824 [2024-07-26 01:44:44.685179] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:09:02.824 [2024-07-26 01:44:44.685271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.824 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.824 [2024-07-26 01:44:44.756146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.082 [2024-07-26 01:44:44.848796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:03.082 [2024-07-26 01:44:44.848860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.082 [2024-07-26 01:44:44.848886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.082 [2024-07-26 01:44:44.848900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.082 [2024-07-26 01:44:44.848911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.082 [2024-07-26 01:44:44.848993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.082 [2024-07-26 01:44:44.849087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.082 [2024-07-26 01:44:44.849169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.082 [2024-07-26 01:44:44.849172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.083 
01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.083 01:44:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 [2024-07-26 01:44:45.000921] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 Malloc0 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.083 
01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.083 [2024-07-26 01:44:45.061846] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2172709 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
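Collected from the rpc_cmd calls above, the target-side provisioning is this RPC sequence (shown as the scripts/rpc.py equivalents on the assumption that rpc_cmd wraps the same JSON-RPC socket; it needs the nvmf_tgt started with --wait-for-rpc):

```shell
scripts/rpc.py bdev_set_options -p 5 -c 1    # tiny bdev_io pool/cache, so exhaustion exercises bdev_io_wait
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```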
00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2172711 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:03.083 { 00:09:03.083 "params": { 00:09:03.083 "name": "Nvme$subsystem", 00:09:03.083 "trtype": "$TEST_TRANSPORT", 00:09:03.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.083 "adrfam": "ipv4", 00:09:03.083 "trsvcid": "$NVMF_PORT", 00:09:03.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.083 "hdgst": ${hdgst:-false}, 00:09:03.083 "ddgst": ${ddgst:-false} 00:09:03.083 }, 00:09:03.083 "method": "bdev_nvme_attach_controller" 00:09:03.083 } 00:09:03.083 EOF 00:09:03.083 )") 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2172713 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:03.083 01:44:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:03.083 { 00:09:03.083 "params": { 00:09:03.083 "name": "Nvme$subsystem", 00:09:03.083 "trtype": "$TEST_TRANSPORT", 00:09:03.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.083 "adrfam": "ipv4", 00:09:03.083 "trsvcid": "$NVMF_PORT", 00:09:03.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.083 "hdgst": ${hdgst:-false}, 00:09:03.083 "ddgst": ${ddgst:-false} 00:09:03.083 }, 00:09:03.083 "method": "bdev_nvme_attach_controller" 00:09:03.083 } 00:09:03.083 EOF 00:09:03.083 )") 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2172716 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:03.083 { 00:09:03.083 "params": { 00:09:03.083 "name": "Nvme$subsystem", 00:09:03.083 "trtype": "$TEST_TRANSPORT", 00:09:03.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.083 "adrfam": "ipv4", 
00:09:03.083 "trsvcid": "$NVMF_PORT", 00:09:03.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.083 "hdgst": ${hdgst:-false}, 00:09:03.083 "ddgst": ${ddgst:-false} 00:09:03.083 }, 00:09:03.083 "method": "bdev_nvme_attach_controller" 00:09:03.083 } 00:09:03.083 EOF 00:09:03.083 )") 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:03.083 { 00:09:03.083 "params": { 00:09:03.083 "name": "Nvme$subsystem", 00:09:03.083 "trtype": "$TEST_TRANSPORT", 00:09:03.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.083 "adrfam": "ipv4", 00:09:03.083 "trsvcid": "$NVMF_PORT", 00:09:03.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.083 "hdgst": ${hdgst:-false}, 00:09:03.083 "ddgst": ${ddgst:-false} 00:09:03.083 }, 00:09:03.083 "method": "bdev_nvme_attach_controller" 00:09:03.083 } 00:09:03.083 EOF 00:09:03.083 )") 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@556 -- # jq . 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2172709 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:03.083 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:03.083 "params": { 00:09:03.084 "name": "Nvme1", 00:09:03.084 "trtype": "tcp", 00:09:03.084 "traddr": "10.0.0.2", 00:09:03.084 "adrfam": "ipv4", 00:09:03.084 "trsvcid": "4420", 00:09:03.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.084 "hdgst": false, 00:09:03.084 "ddgst": false 00:09:03.084 }, 00:09:03.084 "method": "bdev_nvme_attach_controller" 00:09:03.084 }' 00:09:03.084 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:03.084 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:03.084 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:03.084 "params": { 00:09:03.084 "name": "Nvme1", 00:09:03.084 "trtype": "tcp", 00:09:03.084 "traddr": "10.0.0.2", 00:09:03.084 "adrfam": "ipv4", 00:09:03.084 "trsvcid": "4420", 00:09:03.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.084 "hdgst": false, 00:09:03.084 "ddgst": false 00:09:03.084 }, 00:09:03.084 "method": "bdev_nvme_attach_controller" 00:09:03.084 }' 00:09:03.084 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:03.084 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:03.084 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:03.084 "params": { 00:09:03.084 "name": "Nvme1", 00:09:03.084 "trtype": "tcp", 00:09:03.084 "traddr": "10.0.0.2", 00:09:03.084 "adrfam": "ipv4", 00:09:03.084 "trsvcid": "4420", 00:09:03.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.084 "hdgst": false, 00:09:03.084 "ddgst": false 00:09:03.084 }, 00:09:03.084 "method": "bdev_nvme_attach_controller" 00:09:03.084 }' 00:09:03.084 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:03.084 01:44:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:03.084 "params": { 00:09:03.084 "name": "Nvme1", 00:09:03.084 "trtype": "tcp", 00:09:03.084 "traddr": "10.0.0.2", 00:09:03.084 "adrfam": "ipv4", 00:09:03.084 "trsvcid": "4420", 00:09:03.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.084 "hdgst": false, 00:09:03.084 "ddgst": false 00:09:03.084 }, 00:09:03.084 "method": "bdev_nvme_attach_controller" 00:09:03.084 }' 00:09:03.342 [2024-07-26 01:44:45.107968] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:09:03.342 [2024-07-26 01:44:45.107967] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
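Each of the four bdevperf instances receives the same attach configuration on /dev/fd/63; stripped of the xtrace timestamps, the per-controller entry printed by gen_nvmf_target_json is:

```json
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
```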
00:09:03.342 [2024-07-26 01:44:45.108056] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:03.342 [2024-07-26 01:44:45.108056] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:03.342 [2024-07-26 01:44:45.109784] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:09:03.342 [2024-07-26 01:44:45.109786] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:09:03.342 [2024-07-26 01:44:45.109865] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:03.342 [2024-07-26 01:44:45.109865] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:03.342 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.342 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.342 [2024-07-26 01:44:45.283830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.600 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.600 [2024-07-26 01:44:45.359305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:03.600 [2024-07-26 01:44:45.384506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.600 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.600 [2024-07-26 01:44:45.459552] 
app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.600 [2024-07-26 01:44:45.463710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:03.600 [2024-07-26 01:44:45.526815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:03.600 [2024-07-26 01:44:45.535185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.600 [2024-07-26 01:44:45.603869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:03.859 Running I/O for 1 seconds... 00:09:03.859 Running I/O for 1 seconds... 00:09:03.859 Running I/O for 1 seconds... 00:09:03.859 Running I/O for 1 seconds... 00:09:04.798 00:09:04.798 Latency(us) 00:09:04.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.798 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:04.798 Nvme1n1 : 1.06 5630.21 21.99 0.00 0.00 21673.31 10825.58 61749.48 00:09:04.798 =================================================================================================================== 00:09:04.798 Total : 5630.21 21.99 0.00 0.00 21673.31 10825.58 61749.48 00:09:04.798 00:09:04.798 Latency(us) 00:09:04.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.798 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:04.798 Nvme1n1 : 1.00 165199.74 645.31 0.00 0.00 771.79 317.06 983.04 00:09:04.798 =================================================================================================================== 00:09:04.798 Total : 165199.74 645.31 0.00 0.00 771.79 317.06 983.04 00:09:04.798 00:09:04.798 Latency(us) 00:09:04.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.798 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:04.798 Nvme1n1 : 1.01 5930.27 23.17 0.00 0.00 21514.76 5752.60 42525.58 00:09:04.798 
=================================================================================================================== 00:09:04.798 Total : 5930.27 23.17 0.00 0.00 21514.76 5752.60 42525.58 00:09:05.057 00:09:05.057 Latency(us) 00:09:05.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.057 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:05.057 Nvme1n1 : 1.01 9185.81 35.88 0.00 0.00 13872.87 7670.14 26602.76 00:09:05.057 =================================================================================================================== 00:09:05.057 Total : 9185.81 35.88 0.00 0.00 13872.87 7670.14 26602.76 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2172711 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2172713 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2172716 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:05.317 01:44:47 
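[Editor's note] The `wait 2172709` / `wait 2172711` / `wait 2172713` / `wait 2172716` calls above (bdev_io_wait.sh@37-40) block on the four backgrounded bdevperf instances before the subsystem is deleted. A minimal, unprivileged sketch of that collect-the-PIDs-then-wait pattern; the sleeps stand in for the real workloads:

```shell
# Launch stand-in background jobs and reap each one by PID, as the test
# script does for its bdevperf instances.
pids=()
for delay in 0.1 0.2; do
  sleep "$delay" &          # stand-in for a backgrounded bdevperf run
  pids+=($!)
done
for pid in "${pids[@]}"; do
  wait "$pid"               # blocks until that job exits; propagates its status
done
all_done=1
echo "all background jobs reaped"
```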
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:05.317 rmmod nvme_tcp 00:09:05.317 rmmod nvme_fabrics 00:09:05.317 rmmod nvme_keyring 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2172675 ']' 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2172675 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2172675 ']' 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2172675 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2172675 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2172675' 00:09:05.317 killing process with pid 2172675 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2172675 00:09:05.317 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2172675 00:09:05.577 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:05.577 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:05.577 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:05.577 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:05.577 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:05.577 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.577 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.577 01:44:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:08.115 00:09:08.115 real 0m7.082s 00:09:08.115 user 0m15.964s 00:09:08.115 sys 0m3.580s 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.115 ************************************ 00:09:08.115 END TEST nvmf_bdev_io_wait 00:09:08.115 ************************************ 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # 
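[Editor's note] The teardown above runs the `killprocess` helper (autotest_common.sh@950-974): verify the PID is alive with `kill -0`, look up its command name with `ps --no-headers -o comm=`, refuse to kill a `sudo` wrapper, then signal and reap it. A simplified sketch of that logic (the real helper's error handling is more thorough):

```shell
# Simplified killprocess: alive-check, name-check, kill, reap.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" || return 1                  # process must exist
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = sudo ] && return 1              # never kill a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true             # reap; ignore the signal exit status
  return 0
}
sleep 30 &
bgpid=$!
killprocess "$bgpid"
```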
run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.115 ************************************ 00:09:08.115 START TEST nvmf_queue_depth 00:09:08.115 ************************************ 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:08.115 * Looking for test storage... 00:09:08.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.115 
01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.115 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.116 01:44:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.116 01:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.500 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.500 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.500 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:09.500 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.500 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.500 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.500 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.500 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.500 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.500 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:09.500 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:09.501 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:09.501 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.501 01:44:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:09.501 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:09.501 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.501 01:44:51 
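[Editor's note] The "Found net devices under 0000:0a:00.0: cvl_0_0" lines come from nvmf/common.sh@382-401, which maps each candidate PCI address to its kernel net device through the sysfs `net/` glob. A sketch of that lookup, using the two E810 addresses from the log (on a machine without these devices the glob stays unexpanded and the line prints a literal `*`):

```shell
# Resolve a PCI address to its net device name(s) via sysfs, as the
# autotest helper does.
list_pci_net_devs() {
  local pci pci_net_devs
  for pci in 0000:0a:00.0 0000:0a:00.1; do        # addresses from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")        # strip path, keep ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done
}
list_pci_net_devs
```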
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:09.501 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.501 
01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:09.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:09:09.762 00:09:09.762 --- 10.0.0.2 ping statistics --- 00:09:09.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.762 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
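[Editor's note] The commands above (nvmf/common.sh@242-264) move one port into a network namespace so target and initiator can talk over real NICs on one host. The same plumbing gathered in one place, with the interface names from the log; `DRY_RUN` defaults to `echo` so the sketch previews the commands, and only running it as root with `DRY_RUN=` (empty) would actually apply them:

```shell
# Preview (or, as root with DRY_RUN= , apply) the target-namespace setup
# performed by nvmf_tcp_init in the log above.
setup_target_ns() {
  local run=${DRY_RUN:-echo}                 # echo = dry run
  local ns=cvl_0_0_ns_spdk
  $run ip netns add "$ns"
  $run ip link set cvl_0_0 netns "$ns"       # target port into the namespace
  $run ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator side
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  $run ip link set cvl_0_1 up
  $run ip netns exec "$ns" ip link set cvl_0_0 up
  $run ip netns exec "$ns" ip link set lo up
  $run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
setup_target_ns
```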
00:09:09.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:09:09.762 00:09:09.762 --- 10.0.0.1 ping statistics --- 00:09:09.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.762 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2174929 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
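[Editor's note] The two `ping -c 1` checks above only verify reachability, but the `rtt min/avg/max/mdev` line is easy to parse if a test wants to assert on latency. A sketch using the exact summary line captured in the log:

```shell
# Extract the average rtt from ping's summary line.
# Fields split on '/' or ' ': $7=min, $8=avg, $9=max, $10=mdev.
parse_avg_rtt() { awk -F'[/ ]' '/^rtt/ {print $8}'; }

ping_tail='rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms'
printf '%s\n' "$ping_tail" | parse_avg_rtt    # prints 0.135
```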
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2174929 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2174929 ']' 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.762 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.762 [2024-07-26 01:44:51.676495] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:09:09.762 [2024-07-26 01:44:51.676572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.762 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.762 [2024-07-26 01:44:51.745105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.021 [2024-07-26 01:44:51.839081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.021 [2024-07-26 01:44:51.839132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:10.021 [2024-07-26 01:44:51.839148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.021 [2024-07-26 01:44:51.839163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.021 [2024-07-26 01:44:51.839174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.021 [2024-07-26 01:44:51.839203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.021 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.021 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:10.021 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.022 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:10.022 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.022 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.022 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.022 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.022 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.022 [2024-07-26 01:44:51.986768] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.022 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.022 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:10.022 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.022 01:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.022 Malloc0 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.281 [2024-07-26 01:44:52.052551] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.281 01:44:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2174954 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2174954 /var/tmp/bdevperf.sock 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2174954 ']' 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.281 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.281 [2024-07-26 01:44:52.099488] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:09:10.281 [2024-07-26 01:44:52.099563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174954 ] 00:09:10.281 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.281 [2024-07-26 01:44:52.160757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.281 [2024-07-26 01:44:52.251204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.539 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.539 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:10.539 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:10.539 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.539 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.539 NVMe0n1 00:09:10.539 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.539 01:44:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:10.799 Running I/O for 10 seconds... 
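The queue-depth test above drives the SPDK target entirely through JSON-RPC calls (`rpc_cmd …` in the trace) before launching bdevperf. As a rough sketch, the RPC sequence visible in the log could be modeled like this; the `rpc.py` path is an assumption, and the NQN, serial, address, and port values are copied from the trace, so treat this as illustrative rather than an exact reproduction of `queue_depth.sh`:

```python
# Sketch of the RPC sequence the queue_depth test issues, as seen in the
# trace above. RPC is an assumed path to SPDK's rpc.py helper; all other
# values (NQN, serial, IP, port) are taken verbatim from the log.

RPC = "scripts/rpc.py"  # assumed location of SPDK's JSON-RPC client

def queue_depth_rpc_sequence(nqn="nqn.2016-06.io.spdk:cnode1",
                             ip="10.0.0.2", port="4420"):
    """Return the rpc.py invocations, in order, mirroring the test setup."""
    return [
        # queue_depth.sh@23: create the TCP transport
        # (-o: transport tuning options, -u 8192: in-capsule data size)
        [RPC, "nvmf_create_transport", "-t", "tcp", "-o", "-u", "8192"],
        # queue_depth.sh@24: 64 MiB malloc bdev with 512-byte blocks
        [RPC, "bdev_malloc_create", "64", "512", "-b", "Malloc0"],
        # queue_depth.sh@25: subsystem allowing any host (-a), fixed serial
        [RPC, "nvmf_create_subsystem", nqn, "-a", "-s", "SPDK00000000000001"],
        # queue_depth.sh@26: expose the malloc bdev as a namespace
        [RPC, "nvmf_subsystem_add_ns", nqn, "Malloc0"],
        # queue_depth.sh@27: listen on the in-namespace interface address
        [RPC, "nvmf_subsystem_add_listener", nqn,
         "-t", "tcp", "-a", ip, "-s", port],
    ]
```

After these five calls succeed, the trace shows bdevperf attaching over TCP (`bdev_nvme_attach_controller … -a 10.0.0.2 -s 4420`) and `bdevperf.py perform_tests` starting the 10-second verify run at queue depth 1024.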
00:09:20.806 00:09:20.806 Latency(us) 00:09:20.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.806 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:20.806 Verification LBA range: start 0x0 length 0x4000 00:09:20.806 NVMe0n1 : 10.09 8405.86 32.84 0.00 0.00 121288.84 24466.77 76118.85 00:09:20.806 =================================================================================================================== 00:09:20.806 Total : 8405.86 32.84 0.00 0.00 121288.84 24466.77 76118.85 00:09:20.806 0 00:09:20.806 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2174954 00:09:20.806 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2174954 ']' 00:09:20.806 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2174954 00:09:20.806 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:20.806 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.806 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2174954 00:09:20.806 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.806 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.806 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2174954' 00:09:20.806 killing process with pid 2174954 00:09:20.806 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2174954 00:09:20.806 Received shutdown signal, test time was about 10.000000 seconds 00:09:20.806 00:09:20.806 Latency(us) 00:09:20.806 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.806 =================================================================================================================== 00:09:20.806 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:20.806 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2174954 00:09:21.066 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:21.066 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:21.066 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:21.066 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:21.066 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:21.066 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:21.066 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:21.066 01:45:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:21.066 rmmod nvme_tcp 00:09:21.066 rmmod nvme_fabrics 00:09:21.066 rmmod nvme_keyring 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2174929 ']' 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2174929 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2174929 ']' 
00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2174929 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2174929 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2174929' 00:09:21.066 killing process with pid 2174929 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2174929 00:09:21.066 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2174929 00:09:21.325 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:21.325 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:21.325 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:21.325 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:21.325 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:21.325 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.325 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:09:21.325 01:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:23.862 00:09:23.862 real 0m15.767s 00:09:23.862 user 0m22.381s 00:09:23.862 sys 0m2.897s 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.862 ************************************ 00:09:23.862 END TEST nvmf_queue_depth 00:09:23.862 ************************************ 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.862 ************************************ 00:09:23.862 START TEST nvmf_target_multipath 00:09:23.862 ************************************ 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:23.862 * Looking for test storage... 
00:09:23.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:23.862 01:45:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:09:25.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:25.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:25.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:25.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.767 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.768 01:45:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:25.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:25.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:09:25.768 00:09:25.768 --- 10.0.0.2 ping statistics --- 00:09:25.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.768 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:09:25.768 00:09:25.768 --- 10.0.0.1 ping statistics --- 00:09:25.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.768 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:25.768 01:45:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:25.768 only one NIC for nvmf test 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:25.768 rmmod nvme_tcp 00:09:25.768 rmmod nvme_fabrics 00:09:25.768 rmmod nvme_keyring 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.768 01:45:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.768 01:45:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:28.307 01:45:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.307 00:09:28.307 real 0m4.379s 00:09:28.307 user 0m0.833s 00:09:28.307 sys 0m1.540s 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.307 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:28.307 ************************************ 00:09:28.307 END TEST nvmf_target_multipath 00:09:28.308 ************************************ 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.308 
01:45:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.308 ************************************ 00:09:28.308 START TEST nvmf_zcopy 00:09:28.308 ************************************ 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:28.308 * Looking for test storage... 00:09:28.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:28.308 01:45:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:28.308 01:45:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.215 01:45:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:30.215 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:30.215 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.215 01:45:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:30.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.215 
01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.215 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:30.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.216 01:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:30.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:09:30.216 00:09:30.216 --- 10.0.0.2 ping statistics --- 00:09:30.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.216 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:09:30.216 00:09:30.216 --- 10.0.0.1 ping statistics --- 00:09:30.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.216 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2180753 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2180753 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2180753 ']' 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.216 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.216 [2024-07-26 01:45:12.091338] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:09:30.216 [2024-07-26 01:45:12.091446] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.216 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.216 [2024-07-26 01:45:12.165415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.475 [2024-07-26 01:45:12.253633] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.475 [2024-07-26 01:45:12.253692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.475 [2024-07-26 01:45:12.253705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.475 [2024-07-26 01:45:12.253716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.476 [2024-07-26 01:45:12.253725] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:30.476 [2024-07-26 01:45:12.253752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.476 [2024-07-26 01:45:12.402370] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.476 [2024-07-26 01:45:12.418594] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.476 malloc0 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.476 01:45:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:30.476 { 00:09:30.476 "params": { 00:09:30.476 "name": "Nvme$subsystem", 00:09:30.476 "trtype": "$TEST_TRANSPORT", 00:09:30.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.476 "adrfam": "ipv4", 00:09:30.476 "trsvcid": "$NVMF_PORT", 00:09:30.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.476 "hdgst": ${hdgst:-false}, 00:09:30.476 "ddgst": ${ddgst:-false} 00:09:30.476 }, 00:09:30.476 "method": "bdev_nvme_attach_controller" 00:09:30.476 } 00:09:30.476 EOF 00:09:30.476 )") 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:30.476 01:45:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:30.476 "params": { 00:09:30.476 "name": "Nvme1", 00:09:30.476 "trtype": "tcp", 00:09:30.476 "traddr": "10.0.0.2", 00:09:30.476 "adrfam": "ipv4", 00:09:30.476 "trsvcid": "4420", 00:09:30.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.476 "hdgst": false, 00:09:30.476 "ddgst": false 00:09:30.476 }, 00:09:30.476 "method": "bdev_nvme_attach_controller" 00:09:30.476 }' 00:09:30.734 [2024-07-26 01:45:12.515046] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:09:30.734 [2024-07-26 01:45:12.515151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180773 ] 00:09:30.734 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.734 [2024-07-26 01:45:12.583084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.734 [2024-07-26 01:45:12.666017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.995 Running I/O for 10 seconds... 
00:09:43.214 00:09:43.214 Latency(us) 00:09:43.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.214 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:43.214 Verification LBA range: start 0x0 length 0x1000 00:09:43.214 Nvme1n1 : 10.02 5849.35 45.70 0.00 0.00 21824.02 3737.98 29127.11 00:09:43.214 =================================================================================================================== 00:09:43.214 Total : 5849.35 45.70 0.00 0.00 21824.02 3737.98 29127.11 00:09:43.214 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2182095 00:09:43.214 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:43.214 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.214 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:43.214 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:43.214 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:43.214 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:43.214 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:43.214 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:43.214 { 00:09:43.214 "params": { 00:09:43.214 "name": "Nvme$subsystem", 00:09:43.214 "trtype": "$TEST_TRANSPORT", 00:09:43.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.214 "adrfam": "ipv4", 00:09:43.214 "trsvcid": "$NVMF_PORT", 00:09:43.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.214 "hdgst": 
${hdgst:-false}, 00:09:43.214 "ddgst": ${ddgst:-false} 00:09:43.214 }, 00:09:43.214 "method": "bdev_nvme_attach_controller" 00:09:43.214 } 00:09:43.214 EOF 00:09:43.214 )") 00:09:43.214 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:43.215 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:43.215 [2024-07-26 01:45:23.233011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.233065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:43.215 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:43.215 "params": { 00:09:43.215 "name": "Nvme1", 00:09:43.215 "trtype": "tcp", 00:09:43.215 "traddr": "10.0.0.2", 00:09:43.215 "adrfam": "ipv4", 00:09:43.215 "trsvcid": "4420", 00:09:43.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.215 "hdgst": false, 00:09:43.215 "ddgst": false 00:09:43.215 }, 00:09:43.215 "method": "bdev_nvme_attach_controller" 00:09:43.215 }' 00:09:43.215 [2024-07-26 01:45:23.240980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.241008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.248986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.249007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.257004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.257024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.265026] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.265067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.268880] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:09:43.215 [2024-07-26 01:45:23.268953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182095 ] 00:09:43.215 [2024-07-26 01:45:23.273074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.273121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.281095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.281132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.289137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.289158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.297144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.297165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.215 [2024-07-26 01:45:23.305172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.305200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.313188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.313209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.321203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.321224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.329225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.329246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.334099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.215 [2024-07-26 01:45:23.337247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.337268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.345297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.345348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.353294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.353317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.361309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.361357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.369348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.369372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.377369] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.377389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.385405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.385433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.393448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.393486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.401440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.401464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.409462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.409487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.417486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.417510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.425507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.425532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.428854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.215 [2024-07-26 01:45:23.433529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.433553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:43.215 [2024-07-26 01:45:23.441555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.441579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.449595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.449629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.457622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.457659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.465642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.465681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.473662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.473698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.481686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.481723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.489709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.489748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.497718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.497748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.505738] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.505768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.513773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.513810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.521795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.521832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.529799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.529822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.537822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.537849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.545863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.545894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.553877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.553905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.561899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.561928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.569923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.569950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.577941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.577966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.585964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.585989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.215 [2024-07-26 01:45:23.593988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.215 [2024-07-26 01:45:23.594013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.602009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.602033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.610035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.610071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.618067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.618095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.626087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.626128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.634102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 
[2024-07-26 01:45:23.634139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.642136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.642157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.650153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.650174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.658170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.658190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.666193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.666217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.674207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.674227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.682229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.682250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.690248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.690267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.698271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.698292] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.706297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.706319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.714318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.714358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.722360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.722385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.730387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.730412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.738412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.738437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.746440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.746464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.754462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.754490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 [2024-07-26 01:45:23.763382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.763412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:43.216 [2024-07-26 01:45:23.770512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.216 [2024-07-26 01:45:23.770540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.216 Running I/O for 5 seconds...
00:09:43.216 [... the same error pair (subsystem.c:2058: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553: "Unable to add namespace") repeats at roughly 11 ms intervals from 01:45:23.778 through 01:45:25.616 while the 5-second I/O run is in progress; repeated occurrences elided ...]
00:09:43.738 [2024-07-26 01:45:25.628830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.738 [2024-07-26 01:45:25.628861] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.738 [2024-07-26 01:45:25.638944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.738 [2024-07-26 01:45:25.638974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.738 [2024-07-26 01:45:25.650610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.738 [2024-07-26 01:45:25.650641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.738 [2024-07-26 01:45:25.664042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.738 [2024-07-26 01:45:25.664081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.738 [2024-07-26 01:45:25.674849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.738 [2024-07-26 01:45:25.674876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.738 [2024-07-26 01:45:25.686216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.738 [2024-07-26 01:45:25.686243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.738 [2024-07-26 01:45:25.697101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.738 [2024-07-26 01:45:25.697129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.738 [2024-07-26 01:45:25.708506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.738 [2024-07-26 01:45:25.708536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.738 [2024-07-26 01:45:25.719771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.738 [2024-07-26 01:45:25.719801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:43.738 [2024-07-26 01:45:25.730923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.738 [2024-07-26 01:45:25.730953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.738 [2024-07-26 01:45:25.741683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.738 [2024-07-26 01:45:25.741710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.752738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.752770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.765674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.765716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.775492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.775519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.787344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.787386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.798492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.798523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.809857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.809887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.820840] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.820867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.832186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.832213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.843309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.843337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.854121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.854149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.864859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.864885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.875541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.875573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.886424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.886451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.897090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.897125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.907913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.907940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.918905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.918932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.929701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.929727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.940893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.940924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.954199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.954225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.964589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.964619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.975897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.975935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.986828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:25.986859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:25.998170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 
[2024-07-26 01:45:25.998197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.999 [2024-07-26 01:45:26.009174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.999 [2024-07-26 01:45:26.009201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.020616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.020647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.031695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.031725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.044815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.044845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.055842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.055873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.067618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.067649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.078446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.078473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.089373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.089399] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.102403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.102434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.112856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.112887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.123919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.123946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.136917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.136947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.147031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.147082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.158488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.158515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.169655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.169686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.260 [2024-07-26 01:45:26.183384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.260 [2024-07-26 01:45:26.183410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:44.260 [2024-07-26 01:45:26.194417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.261 [2024-07-26 01:45:26.194452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.261 [2024-07-26 01:45:26.205522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.261 [2024-07-26 01:45:26.205553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.261 [2024-07-26 01:45:26.218412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.261 [2024-07-26 01:45:26.218438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.261 [2024-07-26 01:45:26.228211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.261 [2024-07-26 01:45:26.228239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.261 [2024-07-26 01:45:26.239766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.261 [2024-07-26 01:45:26.239794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.261 [2024-07-26 01:45:26.250448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.261 [2024-07-26 01:45:26.250474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.261 [2024-07-26 01:45:26.260916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.261 [2024-07-26 01:45:26.260943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.272302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.272331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.283135] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.283162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.294560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.294590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.305324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.305350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.316118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.316145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.326849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.326875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.340084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.340110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.350599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.350626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.361553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.361584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.372704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.372731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.383799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.383825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.397350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.397377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.409674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.409713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.418947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.418975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.430543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.430571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.521 [2024-07-26 01:45:26.441609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.521 [2024-07-26 01:45:26.441636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.522 [2024-07-26 01:45:26.452625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.522 [2024-07-26 01:45:26.452663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.522 [2024-07-26 01:45:26.465586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.522 
[2024-07-26 01:45:26.465617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.522 [2024-07-26 01:45:26.476327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.522 [2024-07-26 01:45:26.476355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.522 [2024-07-26 01:45:26.487568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.522 [2024-07-26 01:45:26.487595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.522 [2024-07-26 01:45:26.498457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.522 [2024-07-26 01:45:26.498483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.522 [2024-07-26 01:45:26.509463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.522 [2024-07-26 01:45:26.509490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.522 [2024-07-26 01:45:26.522134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.522 [2024-07-26 01:45:26.522162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.522 [2024-07-26 01:45:26.532685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.522 [2024-07-26 01:45:26.532716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.543739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.543766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.556233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.556264] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.565849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.565880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.577371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.577398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.588653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.588683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.599780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.599811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.613113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.613156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.624150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.624177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.635135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.635162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.647392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.647418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:44.827 [2024-07-26 01:45:26.657639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.657666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.668452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.668479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.681424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.681451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.692117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.692145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.702973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.703000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.713699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.713726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.724627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.724654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.737030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.737081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.746977] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.747004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.827 [2024-07-26 01:45:26.757676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.827 [2024-07-26 01:45:26.757702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.828 [2024-07-26 01:45:26.768551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.828 [2024-07-26 01:45:26.768577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.828 [2024-07-26 01:45:26.781283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.828 [2024-07-26 01:45:26.781310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.828 [2024-07-26 01:45:26.791644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.828 [2024-07-26 01:45:26.791671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.802619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.802648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.813695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.813721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.824967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.824994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.837919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.837946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.848698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.848726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.859297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.859326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.871799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.871827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.883147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.883176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.891999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.892027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.903329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.903357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.915810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.915837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.925885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 
[2024-07-26 01:45:26.925912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.936724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.936752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.948860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.948887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.958830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.958858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.969240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.969268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.979863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.979891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:26.992206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:26.992233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:27.002416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:27.002443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:27.012926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:27.012954] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:27.025489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:27.025517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:27.035252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:27.035301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:27.045695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:27.045722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:27.056252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:27.056279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:27.066743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:27.066770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:27.079501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:27.079529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:27.089514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:27.089541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.091 [2024-07-26 01:45:27.100009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.091 [2024-07-26 01:45:27.100036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:45.352 [2024-07-26 01:45:27.110350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.110378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.120726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.120753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.131159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.131187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.144624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.144651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.155129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.155157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.166110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.166137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.176742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.176768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.187526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.187552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.198994] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.199021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.210029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.210081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.220626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.220654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.231572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.231600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.242395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.242432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.253377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.253404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.264027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.264081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.275080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.275108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.287266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.287294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.296747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.296774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.308814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.308841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.319499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.319527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.330721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.330749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.341731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.341759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.352148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.352176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.352 [2024-07-26 01:45:27.362738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.352 [2024-07-26 01:45:27.362766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.373497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 
[2024-07-26 01:45:27.373526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.384359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.384387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.395067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.395095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.407466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.407494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.417804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.417831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.428448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.428475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.441300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.441328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.451723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.451763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.463007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.463034] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.473885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.473912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.485111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.485139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.495933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.495960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.508332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.508373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.518392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.518419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.529551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.529593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.542807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.542834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.553171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.553201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:45.611 [2024-07-26 01:45:27.564055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.564092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.576756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.611 [2024-07-26 01:45:27.576785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.611 [2024-07-26 01:45:27.587077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.612 [2024-07-26 01:45:27.587105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.612 [2024-07-26 01:45:27.598110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.612 [2024-07-26 01:45:27.598141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.612 [2024-07-26 01:45:27.610599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.612 [2024-07-26 01:45:27.610626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.612 [2024-07-26 01:45:27.621248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.612 [2024-07-26 01:45:27.621277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.632383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.632411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.645391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.645419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.655998] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.656026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.666688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.666723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.679514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.679543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.690246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.690273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.700825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.700853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.711451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.711479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.722393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.722421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.735415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.735461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.745746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.745773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.756419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.756447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.769385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.769412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.779490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.779517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.790743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.790770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.801182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.801210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.811920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.811947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.822464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.822491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.834008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 
[2024-07-26 01:45:27.834049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.845228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.845256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.856517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.870 [2024-07-26 01:45:27.856545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.870 [2024-07-26 01:45:27.869469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.871 [2024-07-26 01:45:27.869496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.871 [2024-07-26 01:45:27.879728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.871 [2024-07-26 01:45:27.879779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:27.890816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:27.890843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:27.903623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:27.903651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:27.913948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:27.913976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:27.925392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:27.925420] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:27.935973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:27.936000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:27.946701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:27.946728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:27.959120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:27.959148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:27.969158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:27.969186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:27.980358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:27.980385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:27.992954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:27.992982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:28.003276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.003305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:28.014287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.014315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:46.131 [2024-07-26 01:45:28.026996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.027023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:28.037449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.037477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:28.048577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.048604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:28.061137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.061165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:28.071118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.071146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:28.082468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.082495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:28.093399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.093452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:28.112265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.112295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:28.122627] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.122655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.131 [2024-07-26 01:45:28.133534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.131 [2024-07-26 01:45:28.133561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.144388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.144416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.155708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.155738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.168797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.168824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.179402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.179429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.190295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.190323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.201335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.201363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.212382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.212409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.224728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.224755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.234584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.234611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.245874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.245901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.258803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.258829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.269422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.269449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.280148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.280176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.293005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.293033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.303147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 
[2024-07-26 01:45:28.303174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.313857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.313883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.326387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.326414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.336439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.336466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.347424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.347452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.358268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.358296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.368925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.368967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.379195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.379223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.390 [2024-07-26 01:45:28.390405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.390 [2024-07-26 01:45:28.390432] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.403293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.403321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.413566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.413593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.423986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.424014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.435266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.435294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.446485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.446512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.459329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.459358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.469527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.469555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.480538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.480566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:46.650 [2024-07-26 01:45:28.492640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.492668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.502540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.502568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.513402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.513429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.524307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.650 [2024-07-26 01:45:28.524349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.650 [2024-07-26 01:45:28.536996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.537027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.651 [2024-07-26 01:45:28.548917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.548959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.651 [2024-07-26 01:45:28.558525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.558552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.651 [2024-07-26 01:45:28.570194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.570222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.651 [2024-07-26 01:45:28.580983] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.581010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.651 [2024-07-26 01:45:28.591420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.591447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.651 [2024-07-26 01:45:28.602468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.602495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.651 [2024-07-26 01:45:28.612960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.613004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.651 [2024-07-26 01:45:28.623530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.623557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.651 [2024-07-26 01:45:28.634181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.634209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.651 [2024-07-26 01:45:28.644887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.644916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.651 [2024-07-26 01:45:28.655774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.651 [2024-07-26 01:45:28.655802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.910 [2024-07-26 01:45:28.666613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:46.910 [2024-07-26 01:45:28.666641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.910 [2024-07-26 01:45:28.677054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.910 [2024-07-26 01:45:28.677094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.910 [2024-07-26 01:45:28.687962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.910 [2024-07-26 01:45:28.687990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.910 [2024-07-26 01:45:28.701005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.910 [2024-07-26 01:45:28.701033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.910 [2024-07-26 01:45:28.711417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.910 [2024-07-26 01:45:28.711446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.722245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.722273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.732901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.732929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.743574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.743601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.754715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 
[2024-07-26 01:45:28.754743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.767528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.767556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.777134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.777179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.788472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.788499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.796511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.796538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 00:09:46.911 Latency(us) 00:09:46.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.911 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:46.911 Nvme1n1 : 5.01 11568.80 90.38 0.00 0.00 11049.55 4587.52 24855.13 00:09:46.911 =================================================================================================================== 00:09:46.911 Total : 11568.80 90.38 0.00 0.00 11049.55 4587.52 24855.13 00:09:46.911 [2024-07-26 01:45:28.803796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.803821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.811816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.811846] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.819859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.819901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.827890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.827936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.835909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.835954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.843925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.843972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.851938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.851985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.859973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.860021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.868003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.868074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.876018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.876068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:46.911 [2024-07-26 01:45:28.884041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.884092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.892071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.892118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.900090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.900137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.908107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.908150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.911 [2024-07-26 01:45:28.916149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.911 [2024-07-26 01:45:28.916193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.171 [2024-07-26 01:45:28.924166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:28.924210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:28.932203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:28.932243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:28.940172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:28.940195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:28.948191] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:28.948220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:28.956246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:28.956293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:28.964264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:28.964308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:28.972252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:28.972284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:28.980263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:28.980287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:28.988337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:28.988383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:28.996351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:28.996396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:29.004333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:29.004375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:29.012362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:29.012384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 [2024-07-26 01:45:29.020384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.172 [2024-07-26 01:45:29.020431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2182095) - No such process 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2182095 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.172 delay0 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.172 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:47.172 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.172 [2024-07-26 01:45:29.146344] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:55.295 Initializing NVMe Controllers 00:09:55.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:55.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:55.295 Initialization complete. Launching workers. 00:09:55.295 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 244, failed: 21407 00:09:55.295 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21529, failed to submit 122 00:09:55.295 success 21434, unsuccess 95, failed 0 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:09:55.295 rmmod nvme_tcp 00:09:55.295 rmmod nvme_fabrics 00:09:55.295 rmmod nvme_keyring 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:55.295 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2180753 ']' 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2180753 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2180753 ']' 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2180753 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2180753 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2180753' 00:09:55.296 killing process with pid 2180753 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2180753 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2180753 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.296 01:45:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.296 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.670 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:56.670 00:09:56.670 real 0m28.782s 00:09:56.670 user 0m41.968s 00:09:56.670 sys 0m9.421s 00:09:56.670 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.670 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.670 ************************************ 00:09:56.670 END TEST nvmf_zcopy 00:09:56.670 ************************************ 00:09:56.670 01:45:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:56.670 01:45:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:56.670 01:45:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.670 01:45:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.928 ************************************ 00:09:56.928 START TEST nvmf_nmic 00:09:56.928 ************************************ 00:09:56.928 01:45:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:56.928 * Looking for test storage... 00:09:56.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:56.928 01:45:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:56.928 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.832 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:58.833 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:58.833 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:58.833 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:58.833 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:58.833 01:45:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:58.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:09:58.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:09:58.833 00:09:58.833 --- 10.0.0.2 ping statistics --- 00:09:58.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.833 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:09:58.833 00:09:58.833 --- 10.0.0.1 ping statistics --- 00:09:58.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.833 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:58.833 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
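The traced commands from nvmf/common.sh@244-268 above follow a fixed pattern: flush both ports of the NIC, move one port into a private network namespace for the target, address both sides on 10.0.0.0/24, open TCP port 4420, and verify reachability in both directions. A condensed sketch of that sequence, assuming root privileges and the cvl_0_0/cvl_0_1 interface names reported by this run:

```shell
# Sketch of the nvmf_tcp_init topology; requires root and a real two-port NIC.
# Interface names (cvl_0_0 / cvl_0_1) and addresses are taken from this log.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
```

With this in place, the target app is launched under `ip netns exec "$NS"` (the `NVMF_TARGET_NS_CMD` prefix seen below), so it binds 10.0.0.2 while the initiator connects from the root namespace.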
common/autotest_common.sh@724 -- # xtrace_disable 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2185489 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2185489 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2185489 ']' 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:58.834 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.834 [2024-07-26 01:45:40.834043] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:09:58.834 [2024-07-26 01:45:40.834142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.092 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.092 [2024-07-26 01:45:40.907069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.092 [2024-07-26 01:45:40.999105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.092 [2024-07-26 01:45:40.999163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.092 [2024-07-26 01:45:40.999180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.092 [2024-07-26 01:45:40.999194] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.092 [2024-07-26 01:45:40.999206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:59.092 [2024-07-26 01:45:40.999271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.092 [2024-07-26 01:45:40.999323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.092 [2024-07-26 01:45:40.999387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.092 [2024-07-26 01:45:40.999390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.352 [2024-07-26 01:45:41.144178] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.352 Malloc0 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.352 [2024-07-26 01:45:41.195239] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:59.352 test case1: single bdev can't be used in multiple subsystems 
00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.352 [2024-07-26 01:45:41.219089] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:59.352 [2024-07-26 01:45:41.219121] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:59.352 [2024-07-26 01:45:41.219137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.352 request: 00:09:59.352 { 00:09:59.352 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:59.352 "namespace": { 00:09:59.352 
"bdev_name": "Malloc0", 00:09:59.352 "no_auto_visible": false 00:09:59.352 }, 00:09:59.352 "method": "nvmf_subsystem_add_ns", 00:09:59.352 "req_id": 1 00:09:59.352 } 00:09:59.352 Got JSON-RPC error response 00:09:59.352 response: 00:09:59.352 { 00:09:59.352 "code": -32602, 00:09:59.352 "message": "Invalid parameters" 00:09:59.352 } 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:59.352 Adding namespace failed - expected result. 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:59.352 test case2: host connect to nvmf target in multiple paths 00:09:59.352 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:59.353 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.353 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.353 [2024-07-26 01:45:41.227208] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:59.353 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.353 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.920 01:45:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:00.857 01:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.857 01:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:00.857 01:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.857 01:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:00.857 01:45:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:02.761 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:02.761 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:02.761 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.761 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:02.761 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.761 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:02.761 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:02.761 [global] 00:10:02.761 thread=1 00:10:02.761 invalidate=1 00:10:02.761 rw=write 00:10:02.761 time_based=1 00:10:02.761 runtime=1 00:10:02.761 ioengine=libaio 00:10:02.761 direct=1 00:10:02.761 bs=4096 00:10:02.761 iodepth=1 00:10:02.761 
norandommap=0 00:10:02.761 numjobs=1 00:10:02.761 00:10:02.761 verify_dump=1 00:10:02.761 verify_backlog=512 00:10:02.761 verify_state_save=0 00:10:02.761 do_verify=1 00:10:02.761 verify=crc32c-intel 00:10:02.761 [job0] 00:10:02.761 filename=/dev/nvme0n1 00:10:02.761 Could not set queue depth (nvme0n1) 00:10:02.761 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.761 fio-3.35 00:10:02.761 Starting 1 thread 00:10:04.138 00:10:04.138 job0: (groupid=0, jobs=1): err= 0: pid=2186125: Fri Jul 26 01:45:45 2024 00:10:04.138 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:04.138 slat (nsec): min=5908, max=66154, avg=21119.49, stdev=10536.56 00:10:04.138 clat (usec): min=230, max=590, avg=333.58, stdev=64.78 00:10:04.138 lat (usec): min=240, max=601, avg=354.70, stdev=70.73 00:10:04.138 clat percentiles (usec): 00:10:04.138 | 1.00th=[ 241], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 281], 00:10:04.138 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 334], 00:10:04.138 | 70.00th=[ 359], 80.00th=[ 388], 90.00th=[ 441], 95.00th=[ 461], 00:10:04.138 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 594], 00:10:04.138 | 99.99th=[ 594] 00:10:04.138 write: IOPS=1927, BW=7708KiB/s (7893kB/s)(7716KiB/1001msec); 0 zone resets 00:10:04.138 slat (usec): min=7, max=31347, avg=30.44, stdev=713.44 00:10:04.138 clat (usec): min=151, max=359, avg=196.73, stdev=29.73 00:10:04.139 lat (usec): min=160, max=31602, avg=227.18, stdev=715.44 00:10:04.139 clat percentiles (usec): 00:10:04.139 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:10:04.139 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:10:04.139 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 229], 95.00th=[ 249], 00:10:04.139 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 359], 99.95th=[ 359], 00:10:04.139 | 99.99th=[ 359] 00:10:04.139 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 
0.00, samples=1 00:10:04.139 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:04.139 lat (usec) : 250=54.78%, 500=44.59%, 750=0.63% 00:10:04.139 cpu : usr=3.10%, sys=6.40%, ctx=3468, majf=0, minf=2 00:10:04.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.139 issued rwts: total=1536,1929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.139 00:10:04.139 Run status group 0 (all jobs): 00:10:04.139 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:10:04.139 WRITE: bw=7708KiB/s (7893kB/s), 7708KiB/s-7708KiB/s (7893kB/s-7893kB/s), io=7716KiB (7901kB), run=1001-1001msec 00:10:04.139 00:10:04.139 Disk stats (read/write): 00:10:04.139 nvme0n1: ios=1526/1536, merge=0/0, ticks=1423/294, in_queue=1717, util=98.60% 00:10:04.139 01:45:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:04.139 rmmod nvme_tcp 00:10:04.139 rmmod nvme_fabrics 00:10:04.139 rmmod nvme_keyring 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2185489 ']' 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2185489 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2185489 ']' 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2185489 00:10:04.139 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:04.397 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:04.397 01:45:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2185489 00:10:04.397 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:04.397 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:04.397 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2185489' 00:10:04.397 killing process with pid 2185489 00:10:04.397 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2185489 00:10:04.397 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2185489 00:10:04.658 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:04.658 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:04.658 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:04.658 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.658 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:04.658 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.658 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.658 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:06.561 00:10:06.561 real 0m9.774s 00:10:06.561 user 0m22.126s 00:10:06.561 sys 0m2.451s 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.561 01:45:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.561 ************************************ 00:10:06.561 END TEST nvmf_nmic 00:10:06.561 ************************************ 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.561 ************************************ 00:10:06.561 START TEST nvmf_fio_target 00:10:06.561 ************************************ 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:06.561 * Looking for test storage... 
00:10:06.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.561 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.562 01:45:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.562 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.820 01:45:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:06.820 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:08.724 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:08.724 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:08.724 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:08.724 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:08.725 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:08.725 01:45:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:08.725 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:08.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:10:08.983 00:10:08.983 --- 10.0.0.2 ping statistics --- 00:10:08.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.983 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:10:08.983 00:10:08.983 --- 10.0.0.1 ping statistics --- 00:10:08.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.983 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:08.983 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:08.984 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:08.984 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.984 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2188205 00:10:08.984 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.984 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2188205 00:10:08.984 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2188205 ']' 00:10:08.984 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.984 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.984 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.984 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.984 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.984 [2024-07-26 01:45:50.862036] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:10:08.984 [2024-07-26 01:45:50.862148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.984 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.984 [2024-07-26 01:45:50.927118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.242 [2024-07-26 01:45:51.014700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.242 [2024-07-26 01:45:51.014760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:09.242 [2024-07-26 01:45:51.014773] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.242 [2024-07-26 01:45:51.014784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.242 [2024-07-26 01:45:51.014794] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.242 [2024-07-26 01:45:51.014889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.242 [2024-07-26 01:45:51.014951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.242 [2024-07-26 01:45:51.015017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.242 [2024-07-26 01:45:51.015019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.242 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.242 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:09.242 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.242 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:09.242 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.242 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.242 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:09.531 [2024-07-26 01:45:51.415288] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.531 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.789 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:09.789 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.047 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:10.047 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.305 01:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:10.305 01:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.563 01:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:10.563 01:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:10.821 01:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.079 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:11.079 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.337 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:11.337 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.594 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:11.594 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:11.851 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.109 01:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:12.109 01:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.367 01:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:12.367 01:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.625 01:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.883 [2024-07-26 01:45:54.789155] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.883 01:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:13.141 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:13.400 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.969 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:13.969 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:13.969 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.969 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:13.969 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:13.969 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:16.505 01:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:16.505 01:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:16.505 01:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.505 01:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:16.505 01:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.505 01:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:16.505 01:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:16.505 [global] 00:10:16.505 thread=1 00:10:16.505 invalidate=1 00:10:16.505 rw=write 00:10:16.505 time_based=1 00:10:16.505 runtime=1 00:10:16.505 ioengine=libaio 00:10:16.505 direct=1 00:10:16.505 bs=4096 00:10:16.506 iodepth=1 00:10:16.506 norandommap=0 00:10:16.506 numjobs=1 00:10:16.506 00:10:16.506 verify_dump=1 00:10:16.506 verify_backlog=512 00:10:16.506 verify_state_save=0 00:10:16.506 do_verify=1 00:10:16.506 verify=crc32c-intel 00:10:16.506 [job0] 00:10:16.506 filename=/dev/nvme0n1 00:10:16.506 [job1] 00:10:16.506 filename=/dev/nvme0n2 00:10:16.506 [job2] 00:10:16.506 filename=/dev/nvme0n3 00:10:16.506 [job3] 00:10:16.506 filename=/dev/nvme0n4 00:10:16.506 Could not set queue depth (nvme0n1) 00:10:16.506 Could not set queue depth (nvme0n2) 00:10:16.506 Could not set queue depth (nvme0n3) 00:10:16.506 Could not set queue depth (nvme0n4) 00:10:16.506 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.506 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.506 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.506 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:16.506 fio-3.35 00:10:16.506 Starting 4 threads 00:10:17.442 00:10:17.442 job0: (groupid=0, jobs=1): err= 0: pid=2189169: Fri Jul 26 01:45:59 2024 00:10:17.442 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:17.442 slat (nsec): min=5832, max=36295, avg=9364.12, stdev=5510.88 00:10:17.442 clat (usec): min=244, max=41986, avg=1456.89, stdev=6503.96 00:10:17.442 lat (usec): min=250, max=42001, avg=1466.26, stdev=6506.95 00:10:17.442 clat percentiles (usec): 00:10:17.442 | 1.00th=[ 260], 5.00th=[ 285], 10.00th=[ 306], 20.00th=[ 322], 
00:10:17.442 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 355], 00:10:17.442 | 70.00th=[ 363], 80.00th=[ 383], 90.00th=[ 437], 95.00th=[ 457], 00:10:17.442 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:17.442 | 99.99th=[42206] 00:10:17.442 write: IOPS=1008, BW=4036KiB/s (4133kB/s)(4040KiB/1001msec); 0 zone resets 00:10:17.442 slat (nsec): min=7732, max=57533, avg=16976.57, stdev=7064.29 00:10:17.442 clat (usec): min=158, max=352, avg=224.16, stdev=35.61 00:10:17.442 lat (usec): min=168, max=365, avg=241.14, stdev=36.33 00:10:17.442 clat percentiles (usec): 00:10:17.442 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 192], 00:10:17.442 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 233], 00:10:17.442 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 289], 00:10:17.442 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 343], 99.95th=[ 355], 00:10:17.442 | 99.99th=[ 355] 00:10:17.442 bw ( KiB/s): min= 4096, max= 4096, per=22.31%, avg=4096.00, stdev= 0.00, samples=1 00:10:17.442 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:17.442 lat (usec) : 250=51.77%, 500=47.17%, 750=0.07% 00:10:17.442 lat (msec) : 20=0.07%, 50=0.92% 00:10:17.442 cpu : usr=1.40%, sys=3.10%, ctx=1522, majf=0, minf=1 00:10:17.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.442 issued rwts: total=512,1010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.442 job1: (groupid=0, jobs=1): err= 0: pid=2189180: Fri Jul 26 01:45:59 2024 00:10:17.442 read: IOPS=525, BW=2102KiB/s (2152kB/s)(2104KiB/1001msec) 00:10:17.442 slat (nsec): min=5289, max=43937, avg=16472.07, stdev=5095.09 00:10:17.442 clat (usec): min=243, max=41995, avg=1429.81, 
stdev=6609.37 00:10:17.442 lat (usec): min=249, max=42010, avg=1446.29, stdev=6610.09 00:10:17.442 clat percentiles (usec): 00:10:17.442 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 306], 20.00th=[ 322], 00:10:17.442 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:10:17.442 | 70.00th=[ 343], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 494], 00:10:17.442 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:17.442 | 99.99th=[42206] 00:10:17.442 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:17.442 slat (nsec): min=6794, max=89634, avg=18351.46, stdev=6908.10 00:10:17.442 clat (usec): min=161, max=408, avg=207.92, stdev=21.28 00:10:17.442 lat (usec): min=170, max=432, avg=226.27, stdev=24.30 00:10:17.442 clat percentiles (usec): 00:10:17.442 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:10:17.442 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 208], 00:10:17.442 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 243], 00:10:17.442 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 371], 99.95th=[ 408], 00:10:17.442 | 99.99th=[ 408] 00:10:17.442 bw ( KiB/s): min= 8192, max= 8192, per=44.62%, avg=8192.00, stdev= 0.00, samples=1 00:10:17.442 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:17.442 lat (usec) : 250=63.55%, 500=35.10%, 750=0.45% 00:10:17.443 lat (msec) : 50=0.90% 00:10:17.443 cpu : usr=3.30%, sys=2.50%, ctx=1551, majf=0, minf=1 00:10:17.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.443 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.443 job2: (groupid=0, jobs=1): err= 0: pid=2189220: Fri Jul 26 01:45:59 2024 00:10:17.443 read: 
IOPS=830, BW=3321KiB/s (3400kB/s)(3324KiB/1001msec) 00:10:17.443 slat (nsec): min=6355, max=45240, avg=13900.64, stdev=6942.84 00:10:17.443 clat (usec): min=265, max=41305, avg=875.57, stdev=4348.07 00:10:17.443 lat (usec): min=271, max=41342, avg=889.47, stdev=4350.14 00:10:17.443 clat percentiles (usec): 00:10:17.443 | 1.00th=[ 281], 5.00th=[ 297], 10.00th=[ 318], 20.00th=[ 330], 00:10:17.443 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 367], 00:10:17.443 | 70.00th=[ 379], 80.00th=[ 404], 90.00th=[ 474], 95.00th=[ 515], 00:10:17.443 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:17.443 | 99.99th=[41157] 00:10:17.443 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:17.443 slat (nsec): min=7492, max=57092, avg=16319.47, stdev=8547.37 00:10:17.443 clat (usec): min=175, max=444, avg=230.25, stdev=32.10 00:10:17.443 lat (usec): min=185, max=469, avg=246.57, stdev=34.63 00:10:17.443 clat percentiles (usec): 00:10:17.443 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 204], 00:10:17.443 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:10:17.443 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 281], 00:10:17.443 | 99.00th=[ 334], 99.50th=[ 388], 99.90th=[ 416], 99.95th=[ 445], 00:10:17.443 | 99.99th=[ 445] 00:10:17.443 bw ( KiB/s): min= 4096, max= 4096, per=22.31%, avg=4096.00, stdev= 0.00, samples=1 00:10:17.443 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:17.443 lat (usec) : 250=42.37%, 500=54.99%, 750=1.99% 00:10:17.443 lat (msec) : 20=0.05%, 50=0.59% 00:10:17.443 cpu : usr=1.90%, sys=3.30%, ctx=1857, majf=0, minf=2 00:10:17.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.443 issued rwts: total=831,1024,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:17.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.443 job3: (groupid=0, jobs=1): err= 0: pid=2189233: Fri Jul 26 01:45:59 2024 00:10:17.443 read: IOPS=1111, BW=4448KiB/s (4554kB/s)(4452KiB/1001msec) 00:10:17.443 slat (nsec): min=6066, max=45090, avg=12798.96, stdev=6069.56 00:10:17.443 clat (usec): min=245, max=41006, avg=526.14, stdev=2713.07 00:10:17.443 lat (usec): min=252, max=41019, avg=538.94, stdev=2714.13 00:10:17.443 clat percentiles (usec): 00:10:17.443 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:10:17.443 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 347], 00:10:17.443 | 70.00th=[ 367], 80.00th=[ 420], 90.00th=[ 445], 95.00th=[ 478], 00:10:17.443 | 99.00th=[ 537], 99.50th=[ 635], 99.90th=[41157], 99.95th=[41157], 00:10:17.443 | 99.99th=[41157] 00:10:17.443 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:17.443 slat (nsec): min=7700, max=61043, avg=16408.88, stdev=7728.49 00:10:17.443 clat (usec): min=158, max=1131, avg=237.02, stdev=54.19 00:10:17.443 lat (usec): min=166, max=1142, avg=253.43, stdev=56.84 00:10:17.443 clat percentiles (usec): 00:10:17.443 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 202], 00:10:17.443 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 241], 00:10:17.443 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 314], 00:10:17.443 | 99.00th=[ 400], 99.50th=[ 437], 99.90th=[ 938], 99.95th=[ 1139], 00:10:17.443 | 99.99th=[ 1139] 00:10:17.443 bw ( KiB/s): min= 4096, max= 4096, per=22.31%, avg=4096.00, stdev= 0.00, samples=1 00:10:17.443 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:17.443 lat (usec) : 250=40.24%, 500=58.81%, 750=0.64%, 1000=0.08% 00:10:17.443 lat (msec) : 2=0.04%, 50=0.19% 00:10:17.443 cpu : usr=3.70%, sys=4.40%, ctx=2653, majf=0, minf=1 00:10:17.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.443 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.443 issued rwts: total=1113,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.443 00:10:17.443 Run status group 0 (all jobs): 00:10:17.443 READ: bw=11.6MiB/s (12.2MB/s), 2046KiB/s-4448KiB/s (2095kB/s-4554kB/s), io=11.6MiB (12.2MB), run=1001-1001msec 00:10:17.443 WRITE: bw=17.9MiB/s (18.8MB/s), 4036KiB/s-6138KiB/s (4133kB/s-6285kB/s), io=17.9MiB (18.8MB), run=1001-1001msec 00:10:17.443 00:10:17.443 Disk stats (read/write): 00:10:17.443 nvme0n1: ios=422/512, merge=0/0, ticks=721/118, in_queue=839, util=85.97% 00:10:17.443 nvme0n2: ios=537/1024, merge=0/0, ticks=594/199, in_queue=793, util=86.03% 00:10:17.443 nvme0n3: ios=569/914, merge=0/0, ticks=1288/196, in_queue=1484, util=97.15% 00:10:17.443 nvme0n4: ios=1081/1042, merge=0/0, ticks=1329/227, in_queue=1556, util=97.23% 00:10:17.443 01:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:17.443 [global] 00:10:17.443 thread=1 00:10:17.443 invalidate=1 00:10:17.443 rw=randwrite 00:10:17.443 time_based=1 00:10:17.443 runtime=1 00:10:17.443 ioengine=libaio 00:10:17.443 direct=1 00:10:17.443 bs=4096 00:10:17.443 iodepth=1 00:10:17.443 norandommap=0 00:10:17.443 numjobs=1 00:10:17.443 00:10:17.443 verify_dump=1 00:10:17.443 verify_backlog=512 00:10:17.443 verify_state_save=0 00:10:17.443 do_verify=1 00:10:17.443 verify=crc32c-intel 00:10:17.443 [job0] 00:10:17.443 filename=/dev/nvme0n1 00:10:17.443 [job1] 00:10:17.443 filename=/dev/nvme0n2 00:10:17.443 [job2] 00:10:17.443 filename=/dev/nvme0n3 00:10:17.443 [job3] 00:10:17.443 filename=/dev/nvme0n4 00:10:17.443 Could not set queue depth (nvme0n1) 00:10:17.443 Could not set queue depth (nvme0n2) 00:10:17.443 Could 
not set queue depth (nvme0n3) 00:10:17.443 Could not set queue depth (nvme0n4) 00:10:17.703 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.703 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.703 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.703 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.703 fio-3.35 00:10:17.703 Starting 4 threads 00:10:19.079 00:10:19.079 job0: (groupid=0, jobs=1): err= 0: pid=2189514: Fri Jul 26 01:46:00 2024 00:10:19.079 read: IOPS=24, BW=99.2KiB/s (102kB/s)(100KiB/1008msec) 00:10:19.079 slat (nsec): min=10065, max=33940, avg=18398.00, stdev=7147.49 00:10:19.079 clat (usec): min=314, max=42008, avg=33179.26, stdev=16759.41 00:10:19.079 lat (usec): min=325, max=42026, avg=33197.66, stdev=16762.47 00:10:19.079 clat percentiles (usec): 00:10:19.079 | 1.00th=[ 314], 5.00th=[ 334], 10.00th=[ 355], 20.00th=[ 383], 00:10:19.079 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:19.079 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:19.079 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:19.079 | 99.99th=[42206] 00:10:19.079 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:19.079 slat (nsec): min=8538, max=76086, avg=25616.58, stdev=10907.07 00:10:19.079 clat (usec): min=179, max=500, avg=314.21, stdev=76.14 00:10:19.079 lat (usec): min=187, max=538, avg=339.83, stdev=72.05 00:10:19.079 clat percentiles (usec): 00:10:19.079 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 219], 00:10:19.079 | 30.00th=[ 249], 40.00th=[ 314], 50.00th=[ 334], 60.00th=[ 347], 00:10:19.079 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 420], 00:10:19.079 | 99.00th=[ 
453], 99.50th=[ 478], 99.90th=[ 502], 99.95th=[ 502], 00:10:19.079 | 99.99th=[ 502] 00:10:19.079 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1 00:10:19.079 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:19.079 lat (usec) : 250=28.68%, 500=67.41%, 750=0.19% 00:10:19.079 lat (msec) : 50=3.72% 00:10:19.079 cpu : usr=0.60%, sys=1.39%, ctx=537, majf=0, minf=1 00:10:19.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.079 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.079 job1: (groupid=0, jobs=1): err= 0: pid=2189515: Fri Jul 26 01:46:00 2024 00:10:19.079 read: IOPS=505, BW=2021KiB/s (2070kB/s)(2076KiB/1027msec) 00:10:19.079 slat (nsec): min=6750, max=53510, avg=8355.21, stdev=4081.83 00:10:19.079 clat (usec): min=243, max=42368, avg=1324.05, stdev=6465.92 00:10:19.079 lat (usec): min=250, max=42376, avg=1332.41, stdev=6467.77 00:10:19.079 clat percentiles (usec): 00:10:19.079 | 1.00th=[ 249], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265], 00:10:19.079 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:10:19.079 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 338], 95.00th=[ 383], 00:10:19.079 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:19.079 | 99.99th=[42206] 00:10:19.079 write: IOPS=997, BW=3988KiB/s (4084kB/s)(4096KiB/1027msec); 0 zone resets 00:10:19.079 slat (nsec): min=8679, max=72550, avg=19452.86, stdev=11025.01 00:10:19.079 clat (usec): min=152, max=554, avg=300.77, stdev=93.69 00:10:19.079 lat (usec): min=161, max=598, avg=320.22, stdev=100.22 00:10:19.079 clat percentiles (usec): 00:10:19.079 | 1.00th=[ 184], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 223], 
00:10:19.079 | 30.00th=[ 231], 40.00th=[ 245], 50.00th=[ 258], 60.00th=[ 289], 00:10:19.079 | 70.00th=[ 351], 80.00th=[ 400], 90.00th=[ 457], 95.00th=[ 478], 00:10:19.079 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 545], 99.95th=[ 553], 00:10:19.079 | 99.99th=[ 553] 00:10:19.079 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=2 00:10:19.079 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:19.079 lat (usec) : 250=30.01%, 500=67.53%, 750=1.62% 00:10:19.079 lat (msec) : 50=0.84% 00:10:19.079 cpu : usr=1.95%, sys=2.83%, ctx=1544, majf=0, minf=1 00:10:19.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.079 issued rwts: total=519,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.079 job2: (groupid=0, jobs=1): err= 0: pid=2189516: Fri Jul 26 01:46:00 2024 00:10:19.079 read: IOPS=1021, BW=4088KiB/s (4186kB/s)(4108KiB/1005msec) 00:10:19.079 slat (nsec): min=6778, max=61897, avg=10565.74, stdev=5953.32 00:10:19.079 clat (usec): min=231, max=41886, avg=566.14, stdev=3353.00 00:10:19.079 lat (usec): min=239, max=41920, avg=576.70, stdev=3354.02 00:10:19.079 clat percentiles (usec): 00:10:19.079 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 251], 00:10:19.079 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:10:19.079 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 355], 95.00th=[ 420], 00:10:19.079 | 99.00th=[ 660], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:10:19.079 | 99.99th=[41681] 00:10:19.079 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:10:19.079 slat (nsec): min=8472, max=69329, avg=18302.56, stdev=10461.67 00:10:19.079 clat (usec): min=165, max=551, avg=242.60, 
stdev=86.25 00:10:19.080 lat (usec): min=174, max=608, avg=260.90, stdev=92.26 00:10:19.080 clat percentiles (usec): 00:10:19.080 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:10:19.080 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 212], 00:10:19.080 | 70.00th=[ 249], 80.00th=[ 297], 90.00th=[ 388], 95.00th=[ 441], 00:10:19.080 | 99.00th=[ 498], 99.50th=[ 523], 99.90th=[ 545], 99.95th=[ 553], 00:10:19.080 | 99.99th=[ 553] 00:10:19.080 bw ( KiB/s): min= 4096, max= 8192, per=44.01%, avg=6144.00, stdev=2896.31, samples=2 00:10:19.080 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:19.080 lat (usec) : 250=48.97%, 500=49.51%, 750=1.17% 00:10:19.080 lat (msec) : 4=0.08%, 50=0.27% 00:10:19.080 cpu : usr=3.29%, sys=4.78%, ctx=2563, majf=0, minf=1 00:10:19.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.080 issued rwts: total=1027,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.080 job3: (groupid=0, jobs=1): err= 0: pid=2189517: Fri Jul 26 01:46:00 2024 00:10:19.080 read: IOPS=454, BW=1818KiB/s (1862kB/s)(1820KiB/1001msec) 00:10:19.080 slat (nsec): min=6912, max=35944, avg=17177.64, stdev=3936.01 00:10:19.080 clat (usec): min=267, max=42051, avg=1927.36, stdev=8009.63 00:10:19.080 lat (usec): min=282, max=42075, avg=1944.54, stdev=8009.41 00:10:19.080 clat percentiles (usec): 00:10:19.080 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:10:19.080 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:10:19.080 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 355], 95.00th=[ 441], 00:10:19.080 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:19.080 | 99.99th=[42206] 00:10:19.080 write: 
IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:19.080 slat (nsec): min=7133, max=59757, avg=15073.13, stdev=6500.44 00:10:19.080 clat (usec): min=171, max=365, avg=201.23, stdev=21.39 00:10:19.080 lat (usec): min=179, max=412, avg=216.30, stdev=22.70 00:10:19.080 clat percentiles (usec): 00:10:19.080 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:10:19.080 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 200], 00:10:19.080 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 241], 00:10:19.080 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 367], 99.95th=[ 367], 00:10:19.080 | 99.99th=[ 367] 00:10:19.080 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1 00:10:19.080 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:19.080 lat (usec) : 250=51.19%, 500=46.85% 00:10:19.080 lat (msec) : 4=0.10%, 50=1.86% 00:10:19.080 cpu : usr=1.60%, sys=1.20%, ctx=967, majf=0, minf=2 00:10:19.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.080 issued rwts: total=455,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.080 00:10:19.080 Run status group 0 (all jobs): 00:10:19.080 READ: bw=7891KiB/s (8080kB/s), 99.2KiB/s-4088KiB/s (102kB/s-4186kB/s), io=8104KiB (8298kB), run=1001-1027msec 00:10:19.080 WRITE: bw=13.6MiB/s (14.3MB/s), 2032KiB/s-6113KiB/s (2081kB/s-6260kB/s), io=14.0MiB (14.7MB), run=1001-1027msec 00:10:19.080 00:10:19.080 Disk stats (read/write): 00:10:19.080 nvme0n1: ios=71/512, merge=0/0, ticks=694/159, in_queue=853, util=87.07% 00:10:19.080 nvme0n2: ios=554/1024, merge=0/0, ticks=1489/268, in_queue=1757, util=97.36% 00:10:19.080 nvme0n3: ios=1024/1408, merge=0/0, ticks=437/321, 
in_queue=758, util=89.03% 00:10:19.080 nvme0n4: ios=69/512, merge=0/0, ticks=723/96, in_queue=819, util=89.68% 00:10:19.080 01:46:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:19.080 [global] 00:10:19.080 thread=1 00:10:19.080 invalidate=1 00:10:19.080 rw=write 00:10:19.080 time_based=1 00:10:19.080 runtime=1 00:10:19.080 ioengine=libaio 00:10:19.080 direct=1 00:10:19.080 bs=4096 00:10:19.080 iodepth=128 00:10:19.080 norandommap=0 00:10:19.080 numjobs=1 00:10:19.080 00:10:19.080 verify_dump=1 00:10:19.080 verify_backlog=512 00:10:19.080 verify_state_save=0 00:10:19.080 do_verify=1 00:10:19.080 verify=crc32c-intel 00:10:19.080 [job0] 00:10:19.080 filename=/dev/nvme0n1 00:10:19.080 [job1] 00:10:19.080 filename=/dev/nvme0n2 00:10:19.080 [job2] 00:10:19.080 filename=/dev/nvme0n3 00:10:19.080 [job3] 00:10:19.080 filename=/dev/nvme0n4 00:10:19.080 Could not set queue depth (nvme0n1) 00:10:19.080 Could not set queue depth (nvme0n2) 00:10:19.080 Could not set queue depth (nvme0n3) 00:10:19.080 Could not set queue depth (nvme0n4) 00:10:19.080 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.080 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.080 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.080 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:19.080 fio-3.35 00:10:19.080 Starting 4 threads 00:10:20.455 00:10:20.455 job0: (groupid=0, jobs=1): err= 0: pid=2189743: Fri Jul 26 01:46:02 2024 00:10:20.455 read: IOPS=2311, BW=9244KiB/s (9466kB/s)(9272KiB/1003msec) 00:10:20.455 slat (usec): min=2, max=26964, avg=219.18, stdev=1562.01 00:10:20.455 clat (usec): min=453, max=107518, 
avg=27976.90, stdev=17853.90 00:10:20.455 lat (msec): min=3, max=107, avg=28.20, stdev=17.99 00:10:20.455 clat percentiles (msec): 00:10:20.455 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 13], 20.00th=[ 18], 00:10:20.455 | 30.00th=[ 18], 40.00th=[ 19], 50.00th=[ 20], 60.00th=[ 23], 00:10:20.455 | 70.00th=[ 32], 80.00th=[ 42], 90.00th=[ 54], 95.00th=[ 66], 00:10:20.455 | 99.00th=[ 81], 99.50th=[ 93], 99.90th=[ 93], 99.95th=[ 93], 00:10:20.455 | 99.99th=[ 108] 00:10:20.455 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:10:20.455 slat (usec): min=4, max=19048, avg=179.59, stdev=1016.03 00:10:20.455 clat (usec): min=1357, max=67340, avg=23351.80, stdev=13862.87 00:10:20.455 lat (usec): min=1364, max=67348, avg=23531.39, stdev=13931.76 00:10:20.455 clat percentiles (usec): 00:10:20.455 | 1.00th=[ 2180], 5.00th=[ 7373], 10.00th=[ 9896], 20.00th=[10683], 00:10:20.455 | 30.00th=[14746], 40.00th=[19006], 50.00th=[22152], 60.00th=[23200], 00:10:20.455 | 70.00th=[23725], 80.00th=[33424], 90.00th=[44303], 95.00th=[53216], 00:10:20.455 | 99.00th=[65799], 99.50th=[65799], 99.90th=[65799], 99.95th=[67634], 00:10:20.455 | 99.99th=[67634] 00:10:20.455 bw ( KiB/s): min= 8856, max=11624, per=15.73%, avg=10240.00, stdev=1957.27, samples=2 00:10:20.455 iops : min= 2214, max= 2906, avg=2560.00, stdev=489.32, samples=2 00:10:20.455 lat (usec) : 500=0.02% 00:10:20.455 lat (msec) : 2=0.39%, 4=1.31%, 10=6.07%, 20=39.52%, 50=43.60% 00:10:20.455 lat (msec) : 100=9.06%, 250=0.02% 00:10:20.455 cpu : usr=2.20%, sys=3.79%, ctx=315, majf=0, minf=1 00:10:20.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:20.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.455 issued rwts: total=2318,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.455 job1: 
(groupid=0, jobs=1): err= 0: pid=2189744: Fri Jul 26 01:46:02 2024 00:10:20.455 read: IOPS=4915, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1007msec) 00:10:20.455 slat (usec): min=3, max=13608, avg=98.45, stdev=598.25 00:10:20.455 clat (usec): min=2772, max=43487, avg=12716.93, stdev=5070.75 00:10:20.455 lat (usec): min=6270, max=43496, avg=12815.38, stdev=5103.39 00:10:20.455 clat percentiles (usec): 00:10:20.455 | 1.00th=[ 7635], 5.00th=[ 8979], 10.00th=[10421], 20.00th=[10814], 00:10:20.455 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:10:20.455 | 70.00th=[12125], 80.00th=[13173], 90.00th=[15270], 95.00th=[20317], 00:10:20.455 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:20.455 | 99.99th=[43254] 00:10:20.455 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:10:20.455 slat (usec): min=3, max=25555, avg=90.98, stdev=634.51 00:10:20.455 clat (usec): min=900, max=29298, avg=12097.86, stdev=3237.11 00:10:20.455 lat (usec): min=907, max=40082, avg=12188.84, stdev=3263.11 00:10:20.455 clat percentiles (usec): 00:10:20.455 | 1.00th=[ 5211], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10683], 00:10:20.455 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:10:20.455 | 70.00th=[11994], 80.00th=[12387], 90.00th=[14222], 95.00th=[16450], 00:10:20.455 | 99.00th=[28443], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:10:20.455 | 99.99th=[29230] 00:10:20.455 bw ( KiB/s): min=20480, max=20521, per=31.50%, avg=20500.50, stdev=28.99, samples=2 00:10:20.455 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:20.455 lat (usec) : 1000=0.03% 00:10:20.455 lat (msec) : 4=0.14%, 10=8.38%, 20=87.69%, 50=3.76% 00:10:20.455 cpu : usr=6.26%, sys=9.24%, ctx=552, majf=0, minf=1 00:10:20.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:20.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.455 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.455 issued rwts: total=4950,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.455 job2: (groupid=0, jobs=1): err= 0: pid=2189745: Fri Jul 26 01:46:02 2024 00:10:20.455 read: IOPS=3447, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1004msec) 00:10:20.455 slat (usec): min=2, max=15409, avg=136.30, stdev=906.53 00:10:20.455 clat (usec): min=3543, max=40777, avg=16652.63, stdev=6290.72 00:10:20.455 lat (usec): min=3558, max=40785, avg=16788.93, stdev=6340.64 00:10:20.455 clat percentiles (usec): 00:10:20.455 | 1.00th=[ 5932], 5.00th=[10028], 10.00th=[11338], 20.00th=[12911], 00:10:20.455 | 30.00th=[13435], 40.00th=[13960], 50.00th=[15139], 60.00th=[15401], 00:10:20.455 | 70.00th=[16188], 80.00th=[20317], 90.00th=[27395], 95.00th=[30540], 00:10:20.455 | 99.00th=[37487], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:10:20.455 | 99.99th=[40633] 00:10:20.455 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:10:20.455 slat (usec): min=4, max=13306, avg=128.03, stdev=641.02 00:10:20.455 clat (usec): min=841, max=54797, avg=19433.26, stdev=8496.25 00:10:20.455 lat (usec): min=867, max=55480, avg=19561.30, stdev=8549.74 00:10:20.455 clat percentiles (usec): 00:10:20.455 | 1.00th=[ 4293], 5.00th=[ 7439], 10.00th=[ 8356], 20.00th=[11207], 00:10:20.455 | 30.00th=[13566], 40.00th=[16188], 50.00th=[20579], 60.00th=[22938], 00:10:20.456 | 70.00th=[23725], 80.00th=[26084], 90.00th=[28967], 95.00th=[30016], 00:10:20.456 | 99.00th=[50594], 99.50th=[51643], 99.90th=[54789], 99.95th=[54789], 00:10:20.456 | 99.99th=[54789] 00:10:20.456 bw ( KiB/s): min=12304, max=16368, per=22.03%, avg=14336.00, stdev=2873.68, samples=2 00:10:20.456 iops : min= 3076, max= 4092, avg=3584.00, stdev=718.42, samples=2 00:10:20.456 lat (usec) : 1000=0.01% 00:10:20.456 lat (msec) : 2=0.09%, 4=0.44%, 10=9.30%, 20=53.50%, 50=36.00% 00:10:20.456 lat 
(msec) : 100=0.67% 00:10:20.456 cpu : usr=4.39%, sys=6.88%, ctx=384, majf=0, minf=1 00:10:20.456 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:20.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.456 issued rwts: total=3461,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.456 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.456 job3: (groupid=0, jobs=1): err= 0: pid=2189746: Fri Jul 26 01:46:02 2024 00:10:20.456 read: IOPS=4956, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1005msec) 00:10:20.456 slat (usec): min=2, max=5911, avg=98.54, stdev=553.74 00:10:20.456 clat (usec): min=3805, max=24951, avg=12498.54, stdev=2034.92 00:10:20.456 lat (usec): min=6381, max=24968, avg=12597.07, stdev=2079.43 00:10:20.456 clat percentiles (usec): 00:10:20.456 | 1.00th=[ 8160], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11731], 00:10:20.456 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:10:20.456 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14615], 95.00th=[16450], 00:10:20.456 | 99.00th=[19006], 99.50th=[20579], 99.90th=[25035], 99.95th=[25035], 00:10:20.456 | 99.99th=[25035] 00:10:20.456 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:10:20.456 slat (usec): min=3, max=7673, avg=89.66, stdev=387.94 00:10:20.456 clat (usec): min=3721, max=41885, avg=12663.41, stdev=3211.96 00:10:20.456 lat (usec): min=4353, max=41899, avg=12753.07, stdev=3217.16 00:10:20.456 clat percentiles (usec): 00:10:20.456 | 1.00th=[ 6390], 5.00th=[ 8356], 10.00th=[10159], 20.00th=[11600], 00:10:20.456 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:10:20.456 | 70.00th=[12780], 80.00th=[13304], 90.00th=[15008], 95.00th=[16909], 00:10:20.456 | 99.00th=[24249], 99.50th=[33424], 99.90th=[41681], 99.95th=[41681], 00:10:20.456 | 99.99th=[41681] 00:10:20.456 bw ( KiB/s): 
min=20480, max=20480, per=31.47%, avg=20480.00, stdev= 0.00, samples=2 00:10:20.456 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:20.456 lat (msec) : 4=0.02%, 10=9.06%, 20=88.99%, 50=1.93% 00:10:20.456 cpu : usr=5.98%, sys=10.16%, ctx=627, majf=0, minf=1 00:10:20.456 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:20.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.456 issued rwts: total=4981,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.456 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.456 00:10:20.456 Run status group 0 (all jobs): 00:10:20.456 READ: bw=60.9MiB/s (63.9MB/s), 9244KiB/s-19.4MiB/s (9466kB/s-20.3MB/s), io=61.4MiB (64.3MB), run=1003-1007msec 00:10:20.456 WRITE: bw=63.6MiB/s (66.6MB/s), 9.97MiB/s-19.9MiB/s (10.5MB/s-20.9MB/s), io=64.0MiB (67.1MB), run=1003-1007msec 00:10:20.456 00:10:20.456 Disk stats (read/write): 00:10:20.456 nvme0n1: ios=1923/2048, merge=0/0, ticks=21307/18580, in_queue=39887, util=93.79% 00:10:20.456 nvme0n2: ios=4183/4608, merge=0/0, ticks=24833/28676, in_queue=53509, util=98.17% 00:10:20.456 nvme0n3: ios=2702/3072, merge=0/0, ticks=45713/56488, in_queue=102201, util=96.77% 00:10:20.456 nvme0n4: ios=4205/4608, merge=0/0, ticks=25303/27492, in_queue=52795, util=96.74% 00:10:20.456 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:20.456 [global] 00:10:20.456 thread=1 00:10:20.456 invalidate=1 00:10:20.456 rw=randwrite 00:10:20.456 time_based=1 00:10:20.456 runtime=1 00:10:20.456 ioengine=libaio 00:10:20.456 direct=1 00:10:20.456 bs=4096 00:10:20.456 iodepth=128 00:10:20.456 norandommap=0 00:10:20.456 numjobs=1 00:10:20.456 00:10:20.456 verify_dump=1 00:10:20.456 verify_backlog=512 
00:10:20.456 verify_state_save=0 00:10:20.456 do_verify=1 00:10:20.456 verify=crc32c-intel 00:10:20.456 [job0] 00:10:20.456 filename=/dev/nvme0n1 00:10:20.456 [job1] 00:10:20.456 filename=/dev/nvme0n2 00:10:20.456 [job2] 00:10:20.456 filename=/dev/nvme0n3 00:10:20.456 [job3] 00:10:20.456 filename=/dev/nvme0n4 00:10:20.456 Could not set queue depth (nvme0n1) 00:10:20.456 Could not set queue depth (nvme0n2) 00:10:20.456 Could not set queue depth (nvme0n3) 00:10:20.456 Could not set queue depth (nvme0n4) 00:10:20.714 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.714 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.714 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.714 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.714 fio-3.35 00:10:20.714 Starting 4 threads 00:10:22.090 00:10:22.090 job0: (groupid=0, jobs=1): err= 0: pid=2189981: Fri Jul 26 01:46:03 2024 00:10:22.090 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1006msec) 00:10:22.090 slat (usec): min=3, max=24048, avg=135.67, stdev=1067.90 00:10:22.090 clat (usec): min=5026, max=97412, avg=18819.99, stdev=12727.83 00:10:22.090 lat (usec): min=5789, max=99974, avg=18955.66, stdev=12821.36 00:10:22.090 clat percentiles (usec): 00:10:22.090 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[11207], 20.00th=[11994], 00:10:22.090 | 30.00th=[12780], 40.00th=[13829], 50.00th=[14353], 60.00th=[14615], 00:10:22.090 | 70.00th=[15795], 80.00th=[26084], 90.00th=[31065], 95.00th=[50594], 00:10:22.090 | 99.00th=[69731], 99.50th=[69731], 99.90th=[86508], 99.95th=[87557], 00:10:22.090 | 99.99th=[96994] 00:10:22.090 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:10:22.090 slat (usec): min=3, max=25521, avg=145.98, 
stdev=1007.14 00:10:22.090 clat (usec): min=467, max=73100, avg=18403.99, stdev=14328.23 00:10:22.090 lat (usec): min=480, max=73112, avg=18549.96, stdev=14442.07 00:10:22.090 clat percentiles (usec): 00:10:22.090 | 1.00th=[ 4113], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10814], 00:10:22.090 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13173], 60.00th=[13435], 00:10:22.090 | 70.00th=[14091], 80.00th=[21890], 90.00th=[41681], 95.00th=[54264], 00:10:22.090 | 99.00th=[68682], 99.50th=[68682], 99.90th=[72877], 99.95th=[72877], 00:10:22.090 | 99.99th=[72877] 00:10:22.090 bw ( KiB/s): min=12672, max=15096, per=20.12%, avg=13884.00, stdev=1714.03, samples=2 00:10:22.090 iops : min= 3168, max= 3774, avg=3471.00, stdev=428.51, samples=2 00:10:22.090 lat (usec) : 500=0.01% 00:10:22.090 lat (msec) : 10=9.95%, 20=67.14%, 50=16.10%, 100=6.79% 00:10:22.090 cpu : usr=3.08%, sys=4.78%, ctx=352, majf=0, minf=1 00:10:22.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:22.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.090 issued rwts: total=3087,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.090 job1: (groupid=0, jobs=1): err= 0: pid=2189982: Fri Jul 26 01:46:03 2024 00:10:22.090 read: IOPS=4675, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1009msec) 00:10:22.090 slat (usec): min=2, max=26119, avg=100.98, stdev=669.04 00:10:22.090 clat (usec): min=5835, max=48658, avg=12546.86, stdev=4440.72 00:10:22.090 lat (usec): min=5866, max=48666, avg=12647.84, stdev=4483.27 00:10:22.090 clat percentiles (usec): 00:10:22.090 | 1.00th=[ 7177], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10683], 00:10:22.090 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:10:22.090 | 70.00th=[12911], 80.00th=[13960], 90.00th=[15795], 95.00th=[16319], 00:10:22.090 | 
99.00th=[44303], 99.50th=[46400], 99.90th=[48497], 99.95th=[48497], 00:10:22.090 | 99.99th=[48497] 00:10:22.090 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:10:22.090 slat (usec): min=4, max=11250, avg=92.29, stdev=453.59 00:10:22.090 clat (usec): min=6061, max=54454, avg=13414.07, stdev=7099.81 00:10:22.090 lat (usec): min=6077, max=54595, avg=13506.36, stdev=7136.53 00:10:22.090 clat percentiles (usec): 00:10:22.090 | 1.00th=[ 7308], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10290], 00:10:22.090 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11994], 00:10:22.090 | 70.00th=[12256], 80.00th=[13698], 90.00th=[17957], 95.00th=[25822], 00:10:22.090 | 99.00th=[50070], 99.50th=[52167], 99.90th=[54264], 99.95th=[54264], 00:10:22.090 | 99.99th=[54264] 00:10:22.090 bw ( KiB/s): min=17312, max=23504, per=29.57%, avg=20408.00, stdev=4378.41, samples=2 00:10:22.090 iops : min= 4328, max= 5876, avg=5102.00, stdev=1094.60, samples=2 00:10:22.090 lat (msec) : 10=13.12%, 20=81.38%, 50=4.87%, 100=0.63% 00:10:22.090 cpu : usr=8.43%, sys=8.53%, ctx=578, majf=0, minf=1 00:10:22.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:22.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.090 issued rwts: total=4718,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.090 job2: (groupid=0, jobs=1): err= 0: pid=2189983: Fri Jul 26 01:46:03 2024 00:10:22.090 read: IOPS=4301, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1009msec) 00:10:22.090 slat (usec): min=2, max=28026, avg=105.78, stdev=806.08 00:10:22.090 clat (usec): min=2963, max=63214, avg=14673.36, stdev=6391.82 00:10:22.090 lat (usec): min=2987, max=63230, avg=14779.14, stdev=6424.14 00:10:22.090 clat percentiles (usec): 00:10:22.090 | 1.00th=[ 4228], 5.00th=[ 9372], 10.00th=[11076], 
20.00th=[11731], 00:10:22.090 | 30.00th=[12256], 40.00th=[13173], 50.00th=[13435], 60.00th=[13960], 00:10:22.090 | 70.00th=[14615], 80.00th=[15401], 90.00th=[18220], 95.00th=[25035], 00:10:22.090 | 99.00th=[42206], 99.50th=[60556], 99.90th=[60556], 99.95th=[60556], 00:10:22.090 | 99.99th=[63177] 00:10:22.090 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:10:22.090 slat (usec): min=3, max=22580, avg=101.50, stdev=841.71 00:10:22.090 clat (usec): min=774, max=48587, avg=13863.04, stdev=6275.20 00:10:22.090 lat (usec): min=787, max=48602, avg=13964.55, stdev=6318.13 00:10:22.090 clat percentiles (usec): 00:10:22.090 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6587], 20.00th=[10552], 00:10:22.090 | 30.00th=[11469], 40.00th=[13042], 50.00th=[13566], 60.00th=[13960], 00:10:22.090 | 70.00th=[14615], 80.00th=[15270], 90.00th=[17957], 95.00th=[27395], 00:10:22.090 | 99.00th=[35390], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:10:22.090 | 99.99th=[48497] 00:10:22.091 bw ( KiB/s): min=17584, max=19280, per=26.71%, avg=18432.00, stdev=1199.25, samples=2 00:10:22.091 iops : min= 4396, max= 4820, avg=4608.00, stdev=299.81, samples=2 00:10:22.091 lat (usec) : 1000=0.06% 00:10:22.091 lat (msec) : 2=0.04%, 4=0.32%, 10=10.84%, 20=79.49%, 50=8.96% 00:10:22.091 lat (msec) : 100=0.28% 00:10:22.091 cpu : usr=3.97%, sys=6.55%, ctx=314, majf=0, minf=1 00:10:22.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:22.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.091 issued rwts: total=4340,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.091 job3: (groupid=0, jobs=1): err= 0: pid=2189984: Fri Jul 26 01:46:03 2024 00:10:22.091 read: IOPS=3816, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1005msec) 00:10:22.091 slat (usec): min=3, max=18648, 
avg=136.82, stdev=989.27 00:10:22.091 clat (usec): min=2987, max=53090, avg=17156.36, stdev=6007.75 00:10:22.091 lat (usec): min=4608, max=53102, avg=17293.18, stdev=6092.05 00:10:22.091 clat percentiles (usec): 00:10:22.091 | 1.00th=[ 7046], 5.00th=[11207], 10.00th=[11994], 20.00th=[13304], 00:10:22.091 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14746], 60.00th=[15664], 00:10:22.091 | 70.00th=[18220], 80.00th=[21365], 90.00th=[26870], 95.00th=[30278], 00:10:22.091 | 99.00th=[39060], 99.50th=[41157], 99.90th=[41157], 99.95th=[43779], 00:10:22.091 | 99.99th=[53216] 00:10:22.091 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:10:22.091 slat (usec): min=4, max=12186, avg=105.16, stdev=658.25 00:10:22.091 clat (usec): min=2690, max=50907, avg=15016.07, stdev=6799.54 00:10:22.091 lat (usec): min=2697, max=50925, avg=15121.23, stdev=6849.04 00:10:22.091 clat percentiles (usec): 00:10:22.091 | 1.00th=[ 4686], 5.00th=[ 8029], 10.00th=[10421], 20.00th=[12125], 00:10:22.091 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[13960], 00:10:22.091 | 70.00th=[14222], 80.00th=[16188], 90.00th=[20317], 95.00th=[27657], 00:10:22.091 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:10:22.091 | 99.99th=[51119] 00:10:22.091 bw ( KiB/s): min=13600, max=19168, per=23.74%, avg=16384.00, stdev=3937.17, samples=2 00:10:22.091 iops : min= 3400, max= 4792, avg=4096.00, stdev=984.29, samples=2 00:10:22.091 lat (msec) : 4=0.37%, 10=5.18%, 20=78.40%, 50=15.48%, 100=0.57% 00:10:22.091 cpu : usr=4.78%, sys=9.76%, ctx=416, majf=0, minf=1 00:10:22.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:22.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.091 issued rwts: total=3836,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.091 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:10:22.091 00:10:22.091 Run status group 0 (all jobs): 00:10:22.091 READ: bw=61.9MiB/s (64.9MB/s), 12.0MiB/s-18.3MiB/s (12.6MB/s-19.2MB/s), io=62.4MiB (65.5MB), run=1005-1009msec 00:10:22.091 WRITE: bw=67.4MiB/s (70.7MB/s), 13.9MiB/s-19.8MiB/s (14.6MB/s-20.8MB/s), io=68.0MiB (71.3MB), run=1005-1009msec 00:10:22.091 00:10:22.091 Disk stats (read/write): 00:10:22.091 nvme0n1: ios=2470/2560, merge=0/0, ticks=26670/33931, in_queue=60601, util=86.97% 00:10:22.091 nvme0n2: ios=4146/4407, merge=0/0, ticks=23774/27371, in_queue=51145, util=91.57% 00:10:22.091 nvme0n3: ios=3613/4026, merge=0/0, ticks=31592/30028, in_queue=61620, util=97.50% 00:10:22.091 nvme0n4: ios=3501/3584, merge=0/0, ticks=55974/47029, in_queue=103003, util=97.48% 00:10:22.091 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:22.091 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2190122 00:10:22.091 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:22.091 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:22.091 [global] 00:10:22.091 thread=1 00:10:22.091 invalidate=1 00:10:22.091 rw=read 00:10:22.091 time_based=1 00:10:22.091 runtime=10 00:10:22.091 ioengine=libaio 00:10:22.091 direct=1 00:10:22.091 bs=4096 00:10:22.091 iodepth=1 00:10:22.091 norandommap=1 00:10:22.091 numjobs=1 00:10:22.091 00:10:22.091 [job0] 00:10:22.091 filename=/dev/nvme0n1 00:10:22.091 [job1] 00:10:22.091 filename=/dev/nvme0n2 00:10:22.091 [job2] 00:10:22.091 filename=/dev/nvme0n3 00:10:22.091 [job3] 00:10:22.091 filename=/dev/nvme0n4 00:10:22.091 Could not set queue depth (nvme0n1) 00:10:22.091 Could not set queue depth (nvme0n2) 00:10:22.091 Could not set queue depth (nvme0n3) 00:10:22.091 Could not set queue depth (nvme0n4) 00:10:22.091 job0: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.091 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.091 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.091 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.091 fio-3.35 00:10:22.091 Starting 4 threads 00:10:25.375 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:25.375 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:25.375 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=299008, buflen=4096 00:10:25.375 fio: pid=2190335, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:25.375 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.375 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:25.375 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=35565568, buflen=4096 00:10:25.375 fio: pid=2190334, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:25.633 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=20271104, buflen=4096 00:10:25.633 fio: pid=2190329, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:25.633 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.633 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:25.891 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=46788608, buflen=4096 00:10:25.891 fio: pid=2190333, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:25.891 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.891 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:25.891 00:10:25.891 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2190329: Fri Jul 26 01:46:07 2024 00:10:25.891 read: IOPS=1443, BW=5773KiB/s (5912kB/s)(19.3MiB/3429msec) 00:10:25.891 slat (usec): min=5, max=33430, avg=28.21, stdev=649.72 00:10:25.891 clat (usec): min=218, max=42042, avg=655.86, stdev=3646.42 00:10:25.891 lat (usec): min=224, max=42059, avg=684.07, stdev=3703.55 00:10:25.891 clat percentiles (usec): 00:10:25.891 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 255], 20.00th=[ 277], 00:10:25.891 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 322], 00:10:25.891 | 70.00th=[ 343], 80.00th=[ 383], 90.00th=[ 429], 95.00th=[ 498], 00:10:25.891 | 99.00th=[ 685], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:25.891 | 99.99th=[42206] 00:10:25.891 bw ( KiB/s): min= 96, max=11000, per=18.03%, avg=4906.67, stdev=4177.62, samples=6 00:10:25.891 iops : min= 24, max= 2750, avg=1226.67, stdev=1044.41, samples=6 00:10:25.891 lat (usec) : 250=6.71%, 500=88.40%, 750=4.00%, 1000=0.04% 00:10:25.891 lat (msec) : 2=0.04%, 50=0.79% 00:10:25.891 cpu : usr=1.23%, sys=2.48%, ctx=4958, majf=0, minf=1 00:10:25.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:10:25.891 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.891 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.891 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2190333: Fri Jul 26 01:46:07 2024 00:10:25.891 read: IOPS=3093, BW=12.1MiB/s (12.7MB/s)(44.6MiB/3693msec) 00:10:25.891 slat (usec): min=5, max=26969, avg=19.28, stdev=359.61 00:10:25.891 clat (usec): min=218, max=2910, avg=298.96, stdev=66.63 00:10:25.891 lat (usec): min=224, max=27272, avg=318.24, stdev=367.36 00:10:25.891 clat percentiles (usec): 00:10:25.891 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 253], 00:10:25.891 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:10:25.891 | 70.00th=[ 302], 80.00th=[ 326], 90.00th=[ 379], 95.00th=[ 424], 00:10:25.891 | 99.00th=[ 537], 99.50th=[ 594], 99.90th=[ 693], 99.95th=[ 734], 00:10:25.891 | 99.99th=[ 840] 00:10:25.891 bw ( KiB/s): min= 9552, max=13576, per=45.42%, avg=12363.29, stdev=1540.57, samples=7 00:10:25.891 iops : min= 2388, max= 3394, avg=3090.71, stdev=385.06, samples=7 00:10:25.891 lat (usec) : 250=17.03%, 500=81.29%, 750=1.64%, 1000=0.02% 00:10:25.891 lat (msec) : 4=0.01% 00:10:25.891 cpu : usr=2.52%, sys=5.44%, ctx=11431, majf=0, minf=1 00:10:25.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.891 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.891 issued rwts: total=11424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.891 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2190334: Fri Jul 26 01:46:07 2024 00:10:25.891 read: IOPS=2742, 
BW=10.7MiB/s (11.2MB/s)(33.9MiB/3167msec) 00:10:25.891 slat (usec): min=4, max=16583, avg=19.87, stdev=219.65 00:10:25.891 clat (usec): min=210, max=995, avg=337.70, stdev=55.97 00:10:25.891 lat (usec): min=216, max=16990, avg=357.56, stdev=228.63 00:10:25.891 clat percentiles (usec): 00:10:25.891 | 1.00th=[ 229], 5.00th=[ 249], 10.00th=[ 273], 20.00th=[ 293], 00:10:25.891 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 351], 00:10:25.891 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 429], 00:10:25.891 | 99.00th=[ 494], 99.50th=[ 529], 99.90th=[ 652], 99.95th=[ 717], 00:10:25.891 | 99.99th=[ 996] 00:10:25.891 bw ( KiB/s): min= 9728, max=12040, per=40.59%, avg=11046.67, stdev=840.93, samples=6 00:10:25.891 iops : min= 2432, max= 3010, avg=2761.67, stdev=210.23, samples=6 00:10:25.891 lat (usec) : 250=5.09%, 500=94.09%, 750=0.76%, 1000=0.05% 00:10:25.891 cpu : usr=2.15%, sys=5.21%, ctx=8687, majf=0, minf=1 00:10:25.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.891 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.891 issued rwts: total=8684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.892 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2190335: Fri Jul 26 01:46:07 2024 00:10:25.892 read: IOPS=25, BW=101KiB/s (103kB/s)(292KiB/2894msec) 00:10:25.892 slat (nsec): min=9211, max=38394, avg=21526.24, stdev=8823.26 00:10:25.892 clat (usec): min=414, max=42072, avg=39254.82, stdev=9415.34 00:10:25.892 lat (usec): min=429, max=42081, avg=39276.46, stdev=9416.31 00:10:25.892 clat percentiles (usec): 00:10:25.892 | 1.00th=[ 416], 5.00th=[ 545], 10.00th=[41157], 20.00th=[41157], 00:10:25.892 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:10:25.892 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:25.892 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:25.892 | 99.99th=[42206] 00:10:25.892 bw ( KiB/s): min= 96, max= 120, per=0.37%, avg=102.40, stdev=10.43, samples=5 00:10:25.892 iops : min= 24, max= 30, avg=25.60, stdev= 2.61, samples=5 00:10:25.892 lat (usec) : 500=4.05%, 750=1.35% 00:10:25.892 lat (msec) : 50=93.24% 00:10:25.892 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=1 00:10:25.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.892 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.892 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.892 00:10:25.892 Run status group 0 (all jobs): 00:10:25.892 READ: bw=26.6MiB/s (27.9MB/s), 101KiB/s-12.1MiB/s (103kB/s-12.7MB/s), io=98.2MiB (103MB), run=2894-3693msec 00:10:25.892 00:10:25.892 Disk stats (read/write): 00:10:25.892 nvme0n1: ios=4799/0, merge=0/0, ticks=3403/0, in_queue=3403, util=97.63% 00:10:25.892 nvme0n2: ios=11191/0, merge=0/0, ticks=4149/0, in_queue=4149, util=97.53% 00:10:25.892 nvme0n3: ios=8569/0, merge=0/0, ticks=2787/0, in_queue=2787, util=95.91% 00:10:25.892 nvme0n4: ios=72/0, merge=0/0, ticks=2827/0, in_queue=2827, util=96.74% 00:10:26.150 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.150 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:26.408 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.408 01:46:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:26.666 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.666 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:26.923 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.923 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:27.181 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:27.181 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2190122 00:10:27.181 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:27.181 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.439 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.439 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:27.439 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:27.439 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.439 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 
00:10:27.439 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.439 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:27.439 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:27.439 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:27.439 nvmf hotplug test: fio failed as expected 00:10:27.439 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.699 rmmod nvme_tcp 
00:10:27.699 rmmod nvme_fabrics 00:10:27.699 rmmod nvme_keyring 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2188205 ']' 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2188205 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2188205 ']' 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2188205 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2188205 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2188205' 00:10:27.699 killing process with pid 2188205 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2188205 00:10:27.699 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2188205 00:10:27.959 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:27.959 
01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:27.959 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:27.959 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.959 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.959 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.959 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.959 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:30.542 00:10:30.542 real 0m23.391s 00:10:30.542 user 1m21.092s 00:10:30.542 sys 0m7.419s 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.542 ************************************ 00:10:30.542 END TEST nvmf_fio_target 00:10:30.542 ************************************ 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.542 ************************************ 00:10:30.542 START TEST nvmf_bdevio 
00:10:30.542 ************************************ 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:30.542 * Looking for test storage... 00:10:30.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.542 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.543 01:46:11 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:30.543 01:46:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:32.448 01:46:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.448 01:46:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:32.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:32.448 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:32.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:32.448 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.448 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:32.449 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:32.449 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.449 01:46:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:32.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:10:32.449 00:10:32.449 --- 10.0.0.2 ping statistics --- 00:10:32.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.449 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:10:32.449 00:10:32.449 --- 10.0.0.1 ping statistics --- 00:10:32.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.449 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2192922 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2192922 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2192922 ']' 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.449 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.449 [2024-07-26 01:46:14.175828] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:10:32.449 [2024-07-26 01:46:14.175920] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.449 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.449 [2024-07-26 01:46:14.247319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.449 [2024-07-26 01:46:14.341991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.449 [2024-07-26 01:46:14.342077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.449 [2024-07-26 01:46:14.342097] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.449 [2024-07-26 01:46:14.342121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.449 [2024-07-26 01:46:14.342134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:32.449 [2024-07-26 01:46:14.342221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:32.449 [2024-07-26 01:46:14.342277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:32.449 [2024-07-26 01:46:14.342328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:32.449 [2024-07-26 01:46:14.342331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.710 [2024-07-26 01:46:14.489224] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.710 01:46:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.710 Malloc0 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.710 [2024-07-26 01:46:14.540171] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:32.710 { 00:10:32.710 "params": { 00:10:32.710 "name": "Nvme$subsystem", 00:10:32.710 "trtype": "$TEST_TRANSPORT", 00:10:32.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:32.710 "adrfam": "ipv4", 00:10:32.710 "trsvcid": "$NVMF_PORT", 00:10:32.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:32.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:32.710 "hdgst": ${hdgst:-false}, 00:10:32.710 "ddgst": ${ddgst:-false} 00:10:32.710 }, 00:10:32.710 "method": "bdev_nvme_attach_controller" 00:10:32.710 } 00:10:32.710 EOF 00:10:32.710 )") 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:32.710 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:32.710 "params": { 00:10:32.710 "name": "Nvme1", 00:10:32.710 "trtype": "tcp", 00:10:32.710 "traddr": "10.0.0.2", 00:10:32.710 "adrfam": "ipv4", 00:10:32.710 "trsvcid": "4420", 00:10:32.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:32.710 "hdgst": false, 00:10:32.710 "ddgst": false 00:10:32.710 }, 00:10:32.710 "method": "bdev_nvme_attach_controller" 00:10:32.710 }' 00:10:32.710 [2024-07-26 01:46:14.582612] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:10:32.710 [2024-07-26 01:46:14.582700] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192990 ] 00:10:32.710 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.710 [2024-07-26 01:46:14.645446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:32.970 [2024-07-26 01:46:14.738456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.970 [2024-07-26 01:46:14.738508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.970 [2024-07-26 01:46:14.738511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.228 I/O targets: 00:10:33.228 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:33.228 00:10:33.228 00:10:33.228 CUnit - A unit testing framework for C - Version 2.1-3 00:10:33.228 http://cunit.sourceforge.net/ 00:10:33.228 00:10:33.228 00:10:33.228 Suite: bdevio tests on: Nvme1n1 00:10:33.228 Test: blockdev write read block ...passed 00:10:33.228 Test: blockdev write zeroes read block ...passed 00:10:33.228 Test: blockdev write zeroes read no split 
...passed 00:10:33.229 Test: blockdev write zeroes read split ...passed 00:10:33.229 Test: blockdev write zeroes read split partial ...passed 00:10:33.229 Test: blockdev reset ...[2024-07-26 01:46:15.201233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:33.229 [2024-07-26 01:46:15.201343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223ec90 (9): Bad file descriptor 00:10:33.229 [2024-07-26 01:46:15.212241] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:33.229 passed 00:10:33.488 Test: blockdev write read 8 blocks ...passed 00:10:33.488 Test: blockdev write read size > 128k ...passed 00:10:33.488 Test: blockdev write read invalid size ...passed 00:10:33.488 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.488 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.488 Test: blockdev write read max offset ...passed 00:10:33.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.488 Test: blockdev writev readv 8 blocks ...passed 00:10:33.488 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.488 Test: blockdev writev readv block ...passed 00:10:33.488 Test: blockdev writev readv size > 128k ...passed 00:10:33.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.488 Test: blockdev comparev and writev ...[2024-07-26 01:46:15.467786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.488 [2024-07-26 01:46:15.467823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:33.488 [2024-07-26 01:46:15.467847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:10:33.488 [2024-07-26 01:46:15.467864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:33.488 [2024-07-26 01:46:15.468272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.488 [2024-07-26 01:46:15.468298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:33.488 [2024-07-26 01:46:15.468321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.488 [2024-07-26 01:46:15.468345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:33.488 [2024-07-26 01:46:15.468728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.488 [2024-07-26 01:46:15.468759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:33.488 [2024-07-26 01:46:15.468781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.488 [2024-07-26 01:46:15.468797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:33.488 [2024-07-26 01:46:15.469184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.488 [2024-07-26 01:46:15.469208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:33.488 [2024-07-26 01:46:15.469230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.488 [2024-07-26 01:46:15.469246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:33.747 passed 00:10:33.747 Test: blockdev nvme passthru rw ...passed 00:10:33.747 Test: blockdev nvme passthru vendor specific ...[2024-07-26 01:46:15.553388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.747 [2024-07-26 01:46:15.553416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:33.747 [2024-07-26 01:46:15.553577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.747 [2024-07-26 01:46:15.553600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:33.747 [2024-07-26 01:46:15.553757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.747 [2024-07-26 01:46:15.553779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:33.747 [2024-07-26 01:46:15.553932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.747 [2024-07-26 01:46:15.553954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:33.747 passed 00:10:33.747 Test: blockdev nvme admin passthru ...passed 00:10:33.747 Test: blockdev copy ...passed 00:10:33.747 00:10:33.747 Run Summary: Type Total Ran Passed Failed Inactive 00:10:33.747 suites 1 1 n/a 0 0 00:10:33.747 tests 23 23 23 0 0 00:10:33.747 asserts 152 152 152 0 n/a 00:10:33.747 00:10:33.747 Elapsed time = 
1.142 seconds 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:34.007 rmmod nvme_tcp 00:10:34.007 rmmod nvme_fabrics 00:10:34.007 rmmod nvme_keyring 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2192922 ']' 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2192922 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 2192922 ']' 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2192922 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2192922 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2192922' 00:10:34.007 killing process with pid 2192922 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2192922 00:10:34.007 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2192922 00:10:34.266 01:46:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:34.266 01:46:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:34.266 01:46:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:34.266 01:46:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:34.266 01:46:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:34.266 01:46:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.266 01:46:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.266 
01:46:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:36.808 00:10:36.808 real 0m6.256s 00:10:36.808 user 0m10.191s 00:10:36.808 sys 0m2.022s 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.808 ************************************ 00:10:36.808 END TEST nvmf_bdevio 00:10:36.808 ************************************ 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:36.808 00:10:36.808 real 3m50.248s 00:10:36.808 user 9m55.961s 00:10:36.808 sys 1m7.975s 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.808 ************************************ 00:10:36.808 END TEST nvmf_target_core 00:10:36.808 ************************************ 00:10:36.808 01:46:18 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:36.808 01:46:18 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.808 01:46:18 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.808 01:46:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.808 ************************************ 00:10:36.808 START TEST nvmf_target_extra 00:10:36.808 ************************************ 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:36.808 * Looking for test storage... 
00:10:36.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.808 01:46:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.809 01:46:18 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:36.809 ************************************ 00:10:36.809 START TEST nvmf_example 00:10:36.809 ************************************ 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:36.809 * Looking for test storage... 
00:10:36.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:36.809 01:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:36.809 01:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:36.809 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:38.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.716 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:38.717 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:38.717 01:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:38.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:38.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:38.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:10:38.717 00:10:38.717 --- 10.0.0.2 ping statistics --- 00:10:38.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.717 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:38.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:10:38.717 00:10:38.717 --- 10.0.0.1 ping statistics --- 00:10:38.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.717 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2195111 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2195111 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2195111 ']' 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:38.717 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.718 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.976 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.977 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.977 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.977 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.977 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:38.977 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:38.977 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.186 Initializing NVMe Controllers 00:10:51.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:51.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:51.186 Initialization complete. Launching workers. 00:10:51.186 ======================================================== 00:10:51.186 Latency(us) 00:10:51.186 Device Information : IOPS MiB/s Average min max 00:10:51.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15154.80 59.20 4223.11 763.16 15336.35 00:10:51.186 ======================================================== 00:10:51.186 Total : 15154.80 59.20 4223.11 763.16 15336.35 00:10:51.186 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:51.186 rmmod nvme_tcp 00:10:51.186 rmmod nvme_fabrics 00:10:51.186 rmmod nvme_keyring 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2195111 ']' 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2195111 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2195111 ']' 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2195111 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2195111 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2195111' 00:10:51.186 killing process with pid 2195111 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2195111 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2195111 00:10:51.186 nvmf threads initialize successfully 00:10:51.186 bdev subsystem init successfully 00:10:51.186 created a nvmf target service 00:10:51.186 create targets's poll groups done 00:10:51.186 all subsystems of target started 00:10:51.186 nvmf target is running 00:10:51.186 all subsystems of target stopped 00:10:51.186 destroy targets's poll groups done 00:10:51.186 destroyed the nvmf target 
service 00:10:51.186 bdev subsystem finish successfully 00:10:51.186 nvmf threads destroy successfully 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.186 01:46:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.445 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:51.445 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:51.445 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.445 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.445 00:10:51.445 real 0m15.090s 00:10:51.445 user 0m41.675s 00:10:51.445 sys 0m3.322s 00:10:51.445 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:51.445 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.445 ************************************ 00:10:51.445 END TEST nvmf_example 00:10:51.445 ************************************ 00:10:51.705 01:46:33 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:51.705 ************************************ 00:10:51.705 START TEST nvmf_filesystem 00:10:51.705 ************************************ 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:51.705 * Looking for test storage... 00:10:51.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:51.705 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 
00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # 
ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:51.706 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:51.706 #define SPDK_CONFIG_H 00:10:51.706 #define SPDK_CONFIG_APPS 1 00:10:51.706 #define SPDK_CONFIG_ARCH native 00:10:51.706 #undef SPDK_CONFIG_ASAN 00:10:51.706 #undef SPDK_CONFIG_AVAHI 00:10:51.706 #undef SPDK_CONFIG_CET 00:10:51.706 #define SPDK_CONFIG_COVERAGE 1 00:10:51.706 #define SPDK_CONFIG_CROSS_PREFIX 00:10:51.706 #undef SPDK_CONFIG_CRYPTO 00:10:51.706 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:51.706 #undef SPDK_CONFIG_CUSTOMOCF 00:10:51.706 #undef SPDK_CONFIG_DAOS 00:10:51.706 #define SPDK_CONFIG_DAOS_DIR 00:10:51.706 #define SPDK_CONFIG_DEBUG 1 00:10:51.706 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:51.706 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:51.706 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:51.706 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:51.706 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:51.706 #undef SPDK_CONFIG_DPDK_UADK 00:10:51.706 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:51.706 #define 
SPDK_CONFIG_EXAMPLES 1 00:10:51.706 #undef SPDK_CONFIG_FC 00:10:51.706 #define SPDK_CONFIG_FC_PATH 00:10:51.706 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:51.706 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:51.706 #undef SPDK_CONFIG_FUSE 00:10:51.706 #undef SPDK_CONFIG_FUZZER 00:10:51.706 #define SPDK_CONFIG_FUZZER_LIB 00:10:51.706 #undef SPDK_CONFIG_GOLANG 00:10:51.706 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:51.706 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:51.706 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:51.706 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:51.706 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:51.706 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:51.706 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:51.706 #define SPDK_CONFIG_IDXD 1 00:10:51.706 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:51.706 #undef SPDK_CONFIG_IPSEC_MB 00:10:51.706 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:51.706 #define SPDK_CONFIG_ISAL 1 00:10:51.706 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:51.706 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:51.706 #define SPDK_CONFIG_LIBDIR 00:10:51.706 #undef SPDK_CONFIG_LTO 00:10:51.706 #define SPDK_CONFIG_MAX_LCORES 128 00:10:51.706 #define SPDK_CONFIG_NVME_CUSE 1 00:10:51.706 #undef SPDK_CONFIG_OCF 00:10:51.706 #define SPDK_CONFIG_OCF_PATH 00:10:51.706 #define SPDK_CONFIG_OPENSSL_PATH 00:10:51.706 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:51.706 #define SPDK_CONFIG_PGO_DIR 00:10:51.706 #undef SPDK_CONFIG_PGO_USE 00:10:51.706 #define SPDK_CONFIG_PREFIX /usr/local 00:10:51.706 #undef SPDK_CONFIG_RAID5F 00:10:51.706 #undef SPDK_CONFIG_RBD 00:10:51.706 #define SPDK_CONFIG_RDMA 1 00:10:51.706 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:51.706 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:51.706 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:51.706 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:51.706 #define SPDK_CONFIG_SHARED 1 00:10:51.706 #undef SPDK_CONFIG_SMA 00:10:51.706 #define SPDK_CONFIG_TESTS 1 00:10:51.706 #undef SPDK_CONFIG_TSAN 00:10:51.706 #define 
SPDK_CONFIG_UBLK 1 00:10:51.706 #define SPDK_CONFIG_UBSAN 1 00:10:51.706 #undef SPDK_CONFIG_UNIT_TESTS 00:10:51.706 #undef SPDK_CONFIG_URING 00:10:51.706 #define SPDK_CONFIG_URING_PATH 00:10:51.706 #undef SPDK_CONFIG_URING_ZNS 00:10:51.706 #undef SPDK_CONFIG_USDT 00:10:51.707 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:51.707 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:51.707 #define SPDK_CONFIG_VFIO_USER 1 00:10:51.707 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:51.707 #define SPDK_CONFIG_VHOST 1 00:10:51.707 #define SPDK_CONFIG_VIRTIO 1 00:10:51.707 #undef SPDK_CONFIG_VTUNE 00:10:51.707 #define SPDK_CONFIG_VTUNE_DIR 00:10:51.707 #define SPDK_CONFIG_WERROR 1 00:10:51.707 #define SPDK_CONFIG_WPDK_DIR 00:10:51.707 #undef SPDK_CONFIG_XNVME 00:10:51.707 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.707 01:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:51.707 01:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:51.707 
01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:51.707 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:51.708 01:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:51.708 
01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : true 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:51.708 01:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:51.708 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 2196795 ]] 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 2196795 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.6a5QEl 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.6a5QEl/tests/target /tmp/spdk.6a5QEl 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:10:51.709 01:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=52949446656 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994713088 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9045266432 00:10:51.709 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30935175168 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:51.710 01:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376535040 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996758528 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=598016 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:10:51.710 * Looking for test storage... 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=52949446656 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=11259858944 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.710 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.711 01:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:51.711 01:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:51.711 01:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:53.609 01:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:53.609 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:53.609 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:53.609 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.609 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:53.610 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:53.610 01:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.610 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:53.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:53.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:10:53.868 00:10:53.868 --- 10.0.0.2 ping statistics --- 00:10:53.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.868 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:10:53.868 00:10:53.868 --- 10.0.0.1 ping statistics --- 00:10:53.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.868 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:53.868 01:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.868 ************************************ 00:10:53.868 START TEST nvmf_filesystem_no_in_capsule 00:10:53.868 ************************************ 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2198421 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2198421 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 2198421 ']' 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.868 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.869 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.869 [2024-07-26 01:46:35.785953] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:10:53.869 [2024-07-26 01:46:35.786037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.869 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.869 [2024-07-26 01:46:35.852414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.127 [2024-07-26 01:46:35.941838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.127 [2024-07-26 01:46:35.941914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:54.127 [2024-07-26 01:46:35.941928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.127 [2024-07-26 01:46:35.941939] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.127 [2024-07-26 01:46:35.941949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.127 [2024-07-26 01:46:35.942009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.127 [2024-07-26 01:46:35.942082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.127 [2024-07-26 01:46:35.942140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.127 [2024-07-26 01:46:35.942142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.127 [2024-07-26 01:46:36.096585] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.127 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.386 Malloc1 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.386 [2024-07-26 01:46:36.282122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:54.386 01:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:54.386 { 00:10:54.386 "name": "Malloc1", 00:10:54.386 "aliases": [ 00:10:54.386 "72816c6c-00e7-452e-b218-16d2042fb80f" 00:10:54.386 ], 00:10:54.386 "product_name": "Malloc disk", 00:10:54.386 "block_size": 512, 00:10:54.386 "num_blocks": 1048576, 00:10:54.386 "uuid": "72816c6c-00e7-452e-b218-16d2042fb80f", 00:10:54.386 "assigned_rate_limits": { 00:10:54.386 "rw_ios_per_sec": 0, 00:10:54.386 "rw_mbytes_per_sec": 0, 00:10:54.386 "r_mbytes_per_sec": 0, 00:10:54.386 "w_mbytes_per_sec": 0 00:10:54.386 }, 00:10:54.386 "claimed": true, 00:10:54.386 "claim_type": "exclusive_write", 00:10:54.386 "zoned": false, 00:10:54.386 "supported_io_types": { 00:10:54.386 "read": true, 00:10:54.386 "write": true, 00:10:54.386 "unmap": true, 00:10:54.386 "flush": true, 00:10:54.386 "reset": true, 00:10:54.386 "nvme_admin": false, 00:10:54.386 "nvme_io": false, 00:10:54.386 "nvme_io_md": false, 00:10:54.386 "write_zeroes": true, 00:10:54.386 "zcopy": true, 00:10:54.386 "get_zone_info": false, 00:10:54.386 "zone_management": false, 00:10:54.386 "zone_append": false, 00:10:54.386 "compare": false, 00:10:54.386 "compare_and_write": 
false, 00:10:54.386 "abort": true, 00:10:54.386 "seek_hole": false, 00:10:54.386 "seek_data": false, 00:10:54.386 "copy": true, 00:10:54.386 "nvme_iov_md": false 00:10:54.386 }, 00:10:54.386 "memory_domains": [ 00:10:54.386 { 00:10:54.386 "dma_device_id": "system", 00:10:54.386 "dma_device_type": 1 00:10:54.386 }, 00:10:54.386 { 00:10:54.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.386 "dma_device_type": 2 00:10:54.386 } 00:10:54.386 ], 00:10:54.386 "driver_specific": {} 00:10:54.386 } 00:10:54.386 ]' 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:54.386 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.320 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:55.320 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:55.320 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.320 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:55.320 01:46:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:57.217 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:57.217 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:57.217 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.217 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:57.217 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.217 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:57.217 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:57.217 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:57.218 01:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:57.218 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:57.218 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:57.218 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:57.218 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:57.218 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:57.218 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:57.218 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:57.218 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:57.218 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:57.475 01:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:58.853 01:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.853 ************************************ 00:10:58.853 START TEST filesystem_ext4 00:10:58.853 ************************************ 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:58.853 01:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:58.853 01:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:58.853 mke2fs 1.46.5 (30-Dec-2021) 00:10:58.853 Discarding device blocks: 0/522240 done 00:10:58.853 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:58.853 Filesystem UUID: bbae8e4a-49e5-410b-b381-bff38f418a9d 00:10:58.853 Superblock backups stored on blocks: 00:10:58.853 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:58.853 00:10:58.853 Allocating group tables: 0/64 done 00:10:58.853 Writing inode tables: 0/64 done 00:11:01.411 Creating journal (8192 blocks): done 00:11:01.411 Writing superblocks and filesystem accounting information: 0/64 done 00:11:01.411 00:11:01.411 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:01.411 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.668 01:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2198421 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.668 00:11:01.668 real 0m3.102s 00:11:01.668 user 0m0.013s 00:11:01.668 sys 0m0.059s 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:01.668 ************************************ 00:11:01.668 END TEST filesystem_ext4 00:11:01.668 ************************************ 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:01.668 
01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.668 ************************************ 00:11:01.668 START TEST filesystem_btrfs 00:11:01.668 ************************************ 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:01.668 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:01.669 01:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:01.669 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:01.669 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:01.926 btrfs-progs v6.6.2 00:11:01.926 See https://btrfs.readthedocs.io for more information. 00:11:01.926 00:11:01.926 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:01.926 NOTE: several default settings have changed in version 5.15, please make sure 00:11:01.926 this does not affect your deployments: 00:11:01.926 - DUP for metadata (-m dup) 00:11:01.926 - enabled no-holes (-O no-holes) 00:11:01.926 - enabled free-space-tree (-R free-space-tree) 00:11:01.926 00:11:01.926 Label: (null) 00:11:01.926 UUID: 3ce8770e-f52b-4bb6-a80a-1e945d1af976 00:11:01.926 Node size: 16384 00:11:01.926 Sector size: 4096 00:11:01.926 Filesystem size: 510.00MiB 00:11:01.926 Block group profiles: 00:11:01.926 Data: single 8.00MiB 00:11:01.926 Metadata: DUP 32.00MiB 00:11:01.926 System: DUP 8.00MiB 00:11:01.926 SSD detected: yes 00:11:01.926 Zoned device: no 00:11:01.926 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:01.926 Runtime features: free-space-tree 00:11:01.926 Checksum: crc32c 00:11:01.926 Number of devices: 1 00:11:01.926 Devices: 00:11:01.926 ID SIZE PATH 00:11:01.926 1 510.00MiB /dev/nvme0n1p1 00:11:01.926 00:11:01.926 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:01.926 01:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2198421 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.297 00:11:03.297 real 0m1.287s 00:11:03.297 user 0m0.024s 00:11:03.297 sys 0m0.102s 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:03.297 ************************************ 00:11:03.297 END TEST filesystem_btrfs 00:11:03.297 ************************************ 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.297 ************************************ 00:11:03.297 START TEST filesystem_xfs 00:11:03.297 ************************************ 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:03.297 01:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:03.297 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:03.297 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:03.297 = sectsz=512 attr=2, projid32bit=1 00:11:03.297 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:03.297 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:03.297 data = bsize=4096 blocks=130560, imaxpct=25 00:11:03.297 = sunit=0 swidth=0 blks 00:11:03.297 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:03.297 log =internal log bsize=4096 blocks=16384, version=2 00:11:03.297 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:03.297 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:03.861 Discarding blocks...Done. 
00:11:03.861 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:03.861 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2198421 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:06.384 01:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.384 00:11:06.384 real 0m3.336s 00:11:06.384 user 0m0.015s 00:11:06.384 sys 0m0.065s 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:06.384 ************************************ 00:11:06.384 END TEST filesystem_xfs 00:11:06.384 ************************************ 00:11:06.384 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:06.641 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:06.641 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2198421 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2198421 ']' 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2198421 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2198421 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2198421' 00:11:06.899 killing process with pid 2198421 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2198421 00:11:06.899 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2198421 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:07.465 00:11:07.465 real 0m13.454s 00:11:07.465 user 0m51.759s 00:11:07.465 sys 0m1.860s 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.465 ************************************ 00:11:07.465 END TEST nvmf_filesystem_no_in_capsule 00:11:07.465 ************************************ 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.465 01:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:07.465 ************************************ 00:11:07.465 START TEST nvmf_filesystem_in_capsule 00:11:07.465 ************************************ 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2200126 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2200126 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2200126 ']' 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.465 01:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.465 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.466 [2024-07-26 01:46:49.293782] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:11:07.466 [2024-07-26 01:46:49.293856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.466 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.466 [2024-07-26 01:46:49.365369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.466 [2024-07-26 01:46:49.457889] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.466 [2024-07-26 01:46:49.457954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.466 [2024-07-26 01:46:49.457988] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.466 [2024-07-26 01:46:49.458003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.466 [2024-07-26 01:46:49.458015] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:07.466 [2024-07-26 01:46:49.458121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.466 [2024-07-26 01:46:49.458189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.466 [2024-07-26 01:46:49.458289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.466 [2024-07-26 01:46:49.458291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.724 [2024-07-26 01:46:49.602223] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.724 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.982 Malloc1 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.982 01:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.982 [2024-07-26 01:46:49.781072] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.982 01:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:07.982 { 00:11:07.982 "name": "Malloc1", 00:11:07.982 "aliases": [ 00:11:07.982 "46d780f8-1976-47eb-bec9-923a8b91dd50" 00:11:07.982 ], 00:11:07.982 "product_name": "Malloc disk", 00:11:07.982 "block_size": 512, 00:11:07.982 "num_blocks": 1048576, 00:11:07.982 "uuid": "46d780f8-1976-47eb-bec9-923a8b91dd50", 00:11:07.982 "assigned_rate_limits": { 00:11:07.982 "rw_ios_per_sec": 0, 00:11:07.982 "rw_mbytes_per_sec": 0, 00:11:07.982 "r_mbytes_per_sec": 0, 00:11:07.982 "w_mbytes_per_sec": 0 00:11:07.982 }, 00:11:07.982 "claimed": true, 00:11:07.982 "claim_type": "exclusive_write", 00:11:07.982 "zoned": false, 00:11:07.982 "supported_io_types": { 00:11:07.982 "read": true, 00:11:07.982 "write": true, 00:11:07.982 "unmap": true, 00:11:07.982 "flush": true, 00:11:07.982 "reset": true, 00:11:07.982 "nvme_admin": false, 00:11:07.982 "nvme_io": false, 00:11:07.982 "nvme_io_md": false, 00:11:07.982 "write_zeroes": true, 00:11:07.982 "zcopy": true, 00:11:07.982 "get_zone_info": false, 00:11:07.982 "zone_management": false, 00:11:07.982 "zone_append": false, 00:11:07.982 "compare": false, 00:11:07.982 "compare_and_write": false, 00:11:07.982 "abort": true, 00:11:07.982 "seek_hole": false, 00:11:07.982 "seek_data": false, 00:11:07.982 "copy": true, 00:11:07.982 "nvme_iov_md": false 00:11:07.982 }, 00:11:07.982 "memory_domains": [ 00:11:07.982 { 00:11:07.982 "dma_device_id": "system", 00:11:07.982 "dma_device_type": 1 00:11:07.982 }, 00:11:07.982 { 00:11:07.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.982 "dma_device_type": 2 00:11:07.982 } 00:11:07.982 ], 00:11:07.982 
"driver_specific": {} 00:11:07.982 } 00:11:07.982 ]' 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:07.982 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:08.548 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.548 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:08.549 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.549 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:11:08.549 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:11.074 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:11.074 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:11.074 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.074 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:11.075 01:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:11.075 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:11.638 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:12.569 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:12.569 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:12.569 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:12.569 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.569 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.569 ************************************ 00:11:12.569 START TEST filesystem_in_capsule_ext4 00:11:12.569 ************************************ 00:11:12.569 01:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:12.569 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:12.569 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.569 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:12.569 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:12.570 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:12.570 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:12.570 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:12.570 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:12.570 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:12.570 01:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:12.570 mke2fs 1.46.5 (30-Dec-2021) 00:11:12.827 Discarding device blocks: 
0/522240 done 00:11:12.827 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:12.827 Filesystem UUID: 37fc48b6-08e3-49ef-91a2-4bf016aff9b4 00:11:12.827 Superblock backups stored on blocks: 00:11:12.827 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:12.827 00:11:12.827 Allocating group tables: 0/64 done 00:11:12.827 Writing inode tables: 0/64 done 00:11:12.827 Creating journal (8192 blocks): done 00:11:13.341 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:11:13.341 00:11:13.341 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:13.341 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.598 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.598 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:13.598 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.598 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:13.598 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:13.598 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2200126 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.856 00:11:13.856 real 0m1.151s 00:11:13.856 user 0m0.024s 00:11:13.856 sys 0m0.055s 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:13.856 ************************************ 00:11:13.856 END TEST filesystem_in_capsule_ext4 00:11:13.856 ************************************ 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.856 ************************************ 00:11:13.856 START 
TEST filesystem_in_capsule_btrfs 00:11:13.856 ************************************ 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:13.856 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:14.114 btrfs-progs v6.6.2 00:11:14.114 See https://btrfs.readthedocs.io for more information. 00:11:14.114 00:11:14.114 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:14.114 NOTE: several default settings have changed in version 5.15, please make sure 00:11:14.114 this does not affect your deployments: 00:11:14.114 - DUP for metadata (-m dup) 00:11:14.114 - enabled no-holes (-O no-holes) 00:11:14.114 - enabled free-space-tree (-R free-space-tree) 00:11:14.114 00:11:14.114 Label: (null) 00:11:14.114 UUID: d22d34e1-91ef-49ba-a18e-66bf0e8d87bc 00:11:14.114 Node size: 16384 00:11:14.114 Sector size: 4096 00:11:14.114 Filesystem size: 510.00MiB 00:11:14.114 Block group profiles: 00:11:14.114 Data: single 8.00MiB 00:11:14.114 Metadata: DUP 32.00MiB 00:11:14.114 System: DUP 8.00MiB 00:11:14.114 SSD detected: yes 00:11:14.114 Zoned device: no 00:11:14.114 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:14.114 Runtime features: free-space-tree 00:11:14.114 Checksum: crc32c 00:11:14.114 Number of devices: 1 00:11:14.114 Devices: 00:11:14.114 ID SIZE PATH 00:11:14.114 1 510.00MiB /dev/nvme0n1p1 00:11:14.114 00:11:14.114 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:14.114 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:14.679 01:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2200126 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.679 00:11:14.679 real 0m1.007s 00:11:14.679 user 0m0.018s 00:11:14.679 sys 0m0.113s 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.679 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:14.679 ************************************ 00:11:14.679 END TEST 
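target/filesystem.sh exercises each freshly created filesystem with a tiny create/sync/delete cycle (lines 24-30 in the trace) before unmounting. A dry-run sketch of that sequence; `RUN=echo` is an assumption added here so it stays executable without root or a mounted device, drop it to perform the real operations:

```shell
# Dry-run version of the smoke test from target/filesystem.sh:
# touch a file, sync, remove it, sync again, then unmount.
RUN=${RUN:-echo}

fs_smoke_test() {
    local mnt=$1
    $RUN touch "$mnt/aaa"    # create a file on the new filesystem
    $RUN sync                # flush it to the device
    $RUN rm "$mnt/aaa"       # delete it again
    $RUN sync                # flush the deletion
    $RUN umount "$mnt"       # detach the filesystem
}
```

The btrfs pass above completed this cycle in about 1 second of wall time; the xfs pass took closer to 3, most of it in the initial `Discarding blocks` step of mkfs.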
filesystem_in_capsule_btrfs 00:11:14.679 ************************************ 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.940 ************************************ 00:11:14.940 START TEST filesystem_in_capsule_xfs 00:11:14.940 ************************************ 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:14.940 01:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:14.940 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:14.940 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:14.940 = sectsz=512 attr=2, projid32bit=1 00:11:14.940 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:14.940 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:14.940 data = bsize=4096 blocks=130560, imaxpct=25 00:11:14.940 = sunit=0 swidth=0 blks 00:11:14.940 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:14.940 log =internal log bsize=4096 blocks=16384, version=2 00:11:14.940 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:14.940 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:15.878 Discarding blocks...Done. 
00:11:15.878 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:15.878 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2200126 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:17.779 00:11:17.779 real 0m2.854s 00:11:17.779 user 0m0.018s 00:11:17.779 sys 0m0.058s 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:17.779 ************************************ 00:11:17.779 END TEST filesystem_in_capsule_xfs 00:11:17.779 ************************************ 00:11:17.779 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:18.037 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:18.037 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.037 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.037 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:18.037 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:18.037 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.037 01:47:00 
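After `nvme disconnect`, `waitforserial_disconnect` re-runs `lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME` until the namespace disappears (the `local i=0` in the trace is its retry counter). A generic sketch of that poll-until-gone pattern; the helper name and retry budget are illustrative, not the harness's actual code:

```shell
# Re-run a predicate until it stops holding, giving up after N tries.
# Mirrors the retry loop in waitforserial_disconnect.
wait_until_gone() {
    local tries=$1; shift
    local i=0
    while "$@"; do                  # predicate still true -> keep waiting
        i=$((i + 1))
        if [ "$i" -ge "$tries" ]; then
            return 1                # give up after N tries
        fi
        sleep 0.1
    done
    return 0
}
```

In the harness's terms this would be invoked roughly as `wait_until_gone 50 sh -c 'lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME'`.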
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:18.037 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.037 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:18.037 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.037 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.037 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.037 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.038 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:18.038 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2200126 00:11:18.038 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2200126 ']' 00:11:18.038 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2200126 00:11:18.038 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:18.038 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.038 01:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2200126 00:11:18.295 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:18.295 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:18.295 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2200126' 00:11:18.295 killing process with pid 2200126 00:11:18.295 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2200126 00:11:18.295 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2200126 00:11:18.554 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:18.554 00:11:18.554 real 0m11.269s 00:11:18.554 user 0m43.108s 00:11:18.554 sys 0m1.784s 00:11:18.554 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.554 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.554 ************************************ 00:11:18.554 END TEST nvmf_filesystem_in_capsule 00:11:18.554 ************************************ 00:11:18.554 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:18.554 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:18.554 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:18.554 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
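The `killprocess 2200126` sequence above follows a fixed pattern: confirm the pid is still alive with `kill -0`, resolve its command name with `ps --no-headers -o comm=` (refusing to signal a `sudo` wrapper), then kill and reap it. A simplified sketch of that pattern; not the literal autotest_common.sh implementation:

```shell
# Sketch of the killprocess helper: verify the pid is alive, make sure
# it is not a sudo wrapper, then terminate and reap it.
killprocess_sketch() {
    local pid=$1 name
    if ! kill -0 "$pid" 2>/dev/null; then
        return 1                     # process already gone
    fi
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" = sudo ]; then
        return 1                     # never signal the sudo parent
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true  # reap; SIGTERM exit status is expected
}
```

In the log the resolved name is `reactor_0`, the SPDK target's main reactor thread, so the kill proceeds and the `wait 2200126` that follows reaps it.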
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:18.554 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:18.554 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.554 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:18.554 rmmod nvme_tcp 00:11:18.554 rmmod nvme_fabrics 00:11:18.813 rmmod nvme_keyring 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.813 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:20.749 00:11:20.749 real 
0m29.148s 00:11:20.749 user 1m35.782s 00:11:20.749 sys 0m5.148s 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.749 ************************************ 00:11:20.749 END TEST nvmf_filesystem 00:11:20.749 ************************************ 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.749 ************************************ 00:11:20.749 START TEST nvmf_target_discovery 00:11:20.749 ************************************ 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:20.749 * Looking for test storage... 
00:11:20.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
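The paths/export.sh lines above prepend /opt/golangci, /opt/protoc, and /opt/go on every source, so PATH visibly accumulates the same entries many times over the course of the run. That is harmless (lookup stops at the first hit) but noisy; a small dedup helper, not part of the harness, that keeps the first occurrence of each entry in order:

```shell
# Drop duplicate PATH entries, preserving first-occurrence order.
dedup_path() {
    local out= entry seen=:
    local IFS=:
    for entry in $1; do
        case $seen in
            *:"$entry":*) continue ;;   # already kept this entry
        esac
        seen=$seen$entry:
        out=${out:+$out:}$entry
    done
    printf '%s\n' "$out"
}
```

Applied to the PATH echoed above it would collapse the six repeated /opt/protoc:/opt/go:/opt/golangci triples down to one.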
-- # NVMF_PORT_REFERRAL=4430 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:20.749 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:20.750 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.750 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:20.750 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:20.750 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:20.750 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.750 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.750 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.007 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.007 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.007 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.007 01:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:22.906 
01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:11:22.906 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:22.906 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:22.906 01:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:22.906 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.906 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.907 01:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:22.907 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
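[Editor's note] The trace above shows the harness mapping each E810 PCI function to its kernel net interface by globbing sysfs (`nvmf/common.sh@383`/`@399`-`@401`). A minimal re-creation of that logic, run against a throwaway directory tree instead of the real `/sys` so it needs no hardware; the PCI addresses and `cvl_*` names are taken from the log:

```shell
# Fake sysfs layout mirroring what the log discovered (illustrative paths only).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
	pci_net_devs=("$sysfs/$pci/net/"*)       # same glob as nvmf/common.sh@383
	pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the path, keep the interface name
	echo "Found net devices under $pci: ${pci_net_devs[*]}"
	net_devs+=("${pci_net_devs[@]}")
done
rm -r "$sysfs"
```

With two interfaces found, `(( 2 == 0 ))` fails and the harness proceeds with `is_hw=yes`, as the log shows next.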
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.907 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:23.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:23.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:11:23.165 00:11:23.165 --- 10.0.0.2 ping statistics --- 00:11:23.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.165 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:11:23.165 00:11:23.165 --- 10.0.0.1 ping statistics --- 00:11:23.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.165 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:23.165 01:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:23.165 01:47:05 
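[Editor's note] The `nvmf_tcp_init` sequence above (`nvmf/common.sh@229`-`@268`) moves one port into a network namespace so target and initiator can talk over real NIC queues on one host. A condensed dry-run sketch: the `run` wrapper prints each command instead of executing it, so the sequence can be inspected without root; drop the wrapper to apply it for real. Interface names and IPs match the log.

```shell
# Dry-run wrapper: prints instead of executing (these commands need root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target port moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the ns
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

The two pings in the log (0.246 ms and 0.061 ms, 0% loss) are the success criteria before the target is started.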
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2203593 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2203593 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2203593 ']' 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.165 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.165 [2024-07-26 01:47:05.071920] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:11:23.165 [2024-07-26 01:47:05.071998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.165 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.165 [2024-07-26 01:47:05.142906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.423 [2024-07-26 01:47:05.237082] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.423 [2024-07-26 01:47:05.237146] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.423 [2024-07-26 01:47:05.237164] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.423 [2024-07-26 01:47:05.237179] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.423 [2024-07-26 01:47:05.237191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:23.423 [2024-07-26 01:47:05.237254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.423 [2024-07-26 01:47:05.237311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.423 [2024-07-26 01:47:05.237373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.423 [2024-07-26 01:47:05.237375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.423 [2024-07-26 01:47:05.398565] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:23.423 01:47:05 
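[Editor's note] `nvmfappstart` above launches `nvmf_tgt` inside the namespace by prepending the netns wrapper to the app command, the bash array idiom from `nvmf/common.sh@243`/`@270`. A standalone sketch (the `nvmf_tgt` path here is illustrative, not the Jenkins workspace path from the log):

```shell
# Prepending one array to another, element-safe even with spaces in arguments.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
echo "${NVMF_APP[@]}"
```

`-m 0xF` is the core mask, which is why the log reports four reactors started on cores 0-3.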
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.423 Null1 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.423 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:23.424 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.424 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.424 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.424 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:23.424 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.424 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.424 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.682 [2024-07-26 01:47:05.438860] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.682 Null2 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.682 
01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.682 Null3 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.682 Null4 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.682 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:23.683 01:47:05 
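[Editor's note] The long run of `rpc_cmd` records above collapses back to a short loop in `target/discovery.sh@26`-`@35`: four null bdevs, four subsystems, one namespace and one TCP listener each, plus a discovery listener and a referral. A standalone sketch with `rpc_cmd` stubbed to print the call (in the harness it forwards to `scripts/rpc.py` against the target's UNIX socket):

```shell
# Stub: print each RPC instead of sending it to /var/tmp/spdk.sock.
rpc_cmd() { echo "rpc.py $*"; }

for i in $(seq 1 4); do
	rpc_cmd bdev_null_create "Null$i" 102400 512    # 102400 blocks of 512 B
	rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
	rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
	rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

This accounts for the six discovery log records that follow: the current discovery subsystem, cnode1-4 on port 4420, and the referral on port 4430.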
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:23.683 00:11:23.683 Discovery Log Number of Records 6, Generation counter 6 00:11:23.683 =====Discovery Log Entry 0====== 00:11:23.683 trtype: tcp 00:11:23.683 adrfam: ipv4 00:11:23.683 subtype: current discovery subsystem 00:11:23.683 treq: not required 00:11:23.683 portid: 0 00:11:23.683 trsvcid: 4420 00:11:23.683 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:23.683 traddr: 10.0.0.2 00:11:23.683 eflags: explicit discovery connections, duplicate discovery information 00:11:23.683 sectype: none 00:11:23.683 =====Discovery Log Entry 1====== 00:11:23.683 trtype: tcp 00:11:23.683 adrfam: ipv4 00:11:23.683 subtype: nvme subsystem 00:11:23.683 treq: not required 00:11:23.683 portid: 0 00:11:23.683 trsvcid: 4420 00:11:23.683 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:23.683 traddr: 10.0.0.2 00:11:23.683 eflags: none 00:11:23.683 sectype: none 00:11:23.683 =====Discovery Log Entry 2====== 00:11:23.683 trtype: tcp 00:11:23.683 adrfam: ipv4 00:11:23.683 subtype: nvme subsystem 00:11:23.683 treq: not required 00:11:23.683 portid: 0 00:11:23.683 trsvcid: 4420 00:11:23.683 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:23.683 traddr: 10.0.0.2 00:11:23.683 eflags: none 00:11:23.683 sectype: none 00:11:23.683 =====Discovery Log Entry 3====== 00:11:23.683 trtype: tcp 00:11:23.683 adrfam: ipv4 00:11:23.683 subtype: nvme subsystem 00:11:23.683 treq: not required 00:11:23.683 portid: 
0 00:11:23.683 trsvcid: 4420 00:11:23.683 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:23.683 traddr: 10.0.0.2 00:11:23.683 eflags: none 00:11:23.683 sectype: none 00:11:23.683 =====Discovery Log Entry 4====== 00:11:23.683 trtype: tcp 00:11:23.683 adrfam: ipv4 00:11:23.683 subtype: nvme subsystem 00:11:23.683 treq: not required 00:11:23.683 portid: 0 00:11:23.683 trsvcid: 4420 00:11:23.683 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:23.683 traddr: 10.0.0.2 00:11:23.683 eflags: none 00:11:23.683 sectype: none 00:11:23.683 =====Discovery Log Entry 5====== 00:11:23.683 trtype: tcp 00:11:23.683 adrfam: ipv4 00:11:23.683 subtype: discovery subsystem referral 00:11:23.683 treq: not required 00:11:23.683 portid: 0 00:11:23.683 trsvcid: 4430 00:11:23.683 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:23.683 traddr: 10.0.0.2 00:11:23.683 eflags: none 00:11:23.683 sectype: none 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:23.683 Perform nvmf subsystem discovery via RPC 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.683 [ 00:11:23.683 { 00:11:23.683 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:23.683 "subtype": "Discovery", 00:11:23.683 "listen_addresses": [ 00:11:23.683 { 00:11:23.683 "trtype": "TCP", 00:11:23.683 "adrfam": "IPv4", 00:11:23.683 "traddr": "10.0.0.2", 00:11:23.683 "trsvcid": "4420" 00:11:23.683 } 00:11:23.683 ], 00:11:23.683 "allow_any_host": true, 00:11:23.683 "hosts": [] 00:11:23.683 }, 00:11:23.683 { 00:11:23.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:23.683 "subtype": "NVMe", 00:11:23.683 "listen_addresses": [ 
00:11:23.683 { 00:11:23.683 "trtype": "TCP", 00:11:23.683 "adrfam": "IPv4", 00:11:23.683 "traddr": "10.0.0.2", 00:11:23.683 "trsvcid": "4420" 00:11:23.683 } 00:11:23.683 ], 00:11:23.683 "allow_any_host": true, 00:11:23.683 "hosts": [], 00:11:23.683 "serial_number": "SPDK00000000000001", 00:11:23.683 "model_number": "SPDK bdev Controller", 00:11:23.683 "max_namespaces": 32, 00:11:23.683 "min_cntlid": 1, 00:11:23.683 "max_cntlid": 65519, 00:11:23.683 "namespaces": [ 00:11:23.683 { 00:11:23.683 "nsid": 1, 00:11:23.683 "bdev_name": "Null1", 00:11:23.683 "name": "Null1", 00:11:23.683 "nguid": "4CE00237CBFC430CAE0157B1D5051159", 00:11:23.683 "uuid": "4ce00237-cbfc-430c-ae01-57b1d5051159" 00:11:23.683 } 00:11:23.683 ] 00:11:23.683 }, 00:11:23.683 { 00:11:23.683 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:23.683 "subtype": "NVMe", 00:11:23.683 "listen_addresses": [ 00:11:23.683 { 00:11:23.683 "trtype": "TCP", 00:11:23.683 "adrfam": "IPv4", 00:11:23.683 "traddr": "10.0.0.2", 00:11:23.683 "trsvcid": "4420" 00:11:23.683 } 00:11:23.683 ], 00:11:23.683 "allow_any_host": true, 00:11:23.683 "hosts": [], 00:11:23.683 "serial_number": "SPDK00000000000002", 00:11:23.683 "model_number": "SPDK bdev Controller", 00:11:23.683 "max_namespaces": 32, 00:11:23.683 "min_cntlid": 1, 00:11:23.683 "max_cntlid": 65519, 00:11:23.683 "namespaces": [ 00:11:23.683 { 00:11:23.683 "nsid": 1, 00:11:23.683 "bdev_name": "Null2", 00:11:23.683 "name": "Null2", 00:11:23.683 "nguid": "CD7FA6A2D84D466EA6AD1A1372564103", 00:11:23.683 "uuid": "cd7fa6a2-d84d-466e-a6ad-1a1372564103" 00:11:23.683 } 00:11:23.683 ] 00:11:23.683 }, 00:11:23.683 { 00:11:23.683 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:23.683 "subtype": "NVMe", 00:11:23.683 "listen_addresses": [ 00:11:23.683 { 00:11:23.683 "trtype": "TCP", 00:11:23.683 "adrfam": "IPv4", 00:11:23.683 "traddr": "10.0.0.2", 00:11:23.683 "trsvcid": "4420" 00:11:23.683 } 00:11:23.683 ], 00:11:23.683 "allow_any_host": true, 00:11:23.683 "hosts": [], 00:11:23.683 
"serial_number": "SPDK00000000000003", 00:11:23.683 "model_number": "SPDK bdev Controller", 00:11:23.683 "max_namespaces": 32, 00:11:23.683 "min_cntlid": 1, 00:11:23.683 "max_cntlid": 65519, 00:11:23.683 "namespaces": [ 00:11:23.683 { 00:11:23.683 "nsid": 1, 00:11:23.683 "bdev_name": "Null3", 00:11:23.683 "name": "Null3", 00:11:23.683 "nguid": "7F318B6237264A3C8A4C7EE32E9C0842", 00:11:23.683 "uuid": "7f318b62-3726-4a3c-8a4c-7ee32e9c0842" 00:11:23.683 } 00:11:23.683 ] 00:11:23.683 }, 00:11:23.683 { 00:11:23.683 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:23.683 "subtype": "NVMe", 00:11:23.683 "listen_addresses": [ 00:11:23.683 { 00:11:23.683 "trtype": "TCP", 00:11:23.683 "adrfam": "IPv4", 00:11:23.683 "traddr": "10.0.0.2", 00:11:23.683 "trsvcid": "4420" 00:11:23.683 } 00:11:23.683 ], 00:11:23.683 "allow_any_host": true, 00:11:23.683 "hosts": [], 00:11:23.683 "serial_number": "SPDK00000000000004", 00:11:23.683 "model_number": "SPDK bdev Controller", 00:11:23.683 "max_namespaces": 32, 00:11:23.683 "min_cntlid": 1, 00:11:23.683 "max_cntlid": 65519, 00:11:23.683 "namespaces": [ 00:11:23.683 { 00:11:23.683 "nsid": 1, 00:11:23.683 "bdev_name": "Null4", 00:11:23.683 "name": "Null4", 00:11:23.683 "nguid": "51938BBAF42545E4AE64CE9F80572A8D", 00:11:23.683 "uuid": "51938bba-f425-45e4-ae64-ce9f80572a8d" 00:11:23.683 } 00:11:23.683 ] 00:11:23.683 } 00:11:23.683 ] 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:23.683 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.684 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:23.941 
01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:23.941 rmmod nvme_tcp 00:11:23.941 rmmod nvme_fabrics 00:11:23.941 rmmod nvme_keyring 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2203593 ']' 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2203593 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2203593 ']' 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2203593 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2203593 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2203593' 00:11:23.941 killing process with pid 2203593 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2203593 00:11:23.941 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2203593 00:11:24.200 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.200 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:24.200 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.200 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.200 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.200 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.200 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.200 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.733 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:26.733 00:11:26.733 real 0m5.436s 00:11:26.733 user 0m4.159s 00:11:26.733 sys 0m1.864s 00:11:26.733 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.733 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.733 ************************************ 00:11:26.733 END TEST 
nvmf_target_discovery 00:11:26.733 ************************************ 00:11:26.733 01:47:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:26.733 01:47:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:26.733 01:47:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.733 01:47:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.733 ************************************ 00:11:26.733 START TEST nvmf_referrals 00:11:26.733 ************************************ 00:11:26.733 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:26.733 * Looking for test storage... 00:11:26.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.734 01:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.734 01:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:28.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:28.636 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:28.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.636 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:0a:00.1: cvl_0_1' 00:11:28.637 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.637 01:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:28.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:11:28.637 00:11:28.637 --- 10.0.0.2 ping statistics --- 00:11:28.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.637 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:11:28.637 00:11:28.637 --- 10.0.0.1 ping statistics --- 00:11:28.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.637 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2205682 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2205682 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2205682 ']' 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.637 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.637 [2024-07-26 01:47:10.464720] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:11:28.637 [2024-07-26 01:47:10.464820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.637 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.637 [2024-07-26 01:47:10.544803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.637 [2024-07-26 01:47:10.644374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.637 [2024-07-26 01:47:10.644439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:28.637 [2024-07-26 01:47:10.644456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.637 [2024-07-26 01:47:10.644470] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.637 [2024-07-26 01:47:10.644482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.637 [2024-07-26 01:47:10.644543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.637 [2024-07-26 01:47:10.644574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.637 [2024-07-26 01:47:10.644626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.637 [2024-07-26 01:47:10.644628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.896 [2024-07-26 01:47:10.802589] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.896 [2024-07-26 01:47:10.814824] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:28.896 01:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:28.896 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.897 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.897 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:28.897 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:28.897 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:28.897 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:28.897 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:28.897 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.897 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.897 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:28.897 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.155 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:29.155 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:29.155 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:29.155 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:29.155 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:29.155 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.155 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:29.155 01:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.155 01:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:29.155 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:29.422 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:29.423 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:29.423 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:29.423 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:29.423 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.423 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:29.423 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:29.682 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:29.682 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:29.682 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:29.682 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:29.682 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:29.682 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.682 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:29.682 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:29.682 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:29.682 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:29.682 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:29.683 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.683 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:29.941 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.942 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:29.942 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:29.942 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:29.942 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:29.942 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:29.942 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.942 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:29.942 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.200 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:30.200 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:30.200 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:30.200 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:30.200 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:30.200 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.200 01:47:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:30.200 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:30.200 01:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:30.200 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:30.200 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:30.200 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.200 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:30.458 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:30.458 rmmod nvme_tcp 00:11:30.716 rmmod nvme_fabrics 00:11:30.716 rmmod nvme_keyring 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2205682 ']' 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2205682 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2205682 ']' 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2205682 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2205682 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2205682' 00:11:30.716 killing process with pid 2205682 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 2205682 00:11:30.716 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2205682 00:11:30.978 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.978 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.978 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.978 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.978 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.978 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.978 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.978 01:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.890 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.890 00:11:32.890 real 0m6.620s 00:11:32.890 user 0m9.760s 00:11:32.890 sys 0m2.122s 00:11:32.890 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.890 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.890 ************************************ 00:11:32.890 END TEST nvmf_referrals 00:11:32.890 ************************************ 00:11:32.890 01:47:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:32.890 01:47:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- 
# '[' 3 -le 1 ']' 00:11:32.890 01:47:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.890 01:47:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.890 ************************************ 00:11:32.890 START TEST nvmf_connect_disconnect 00:11:32.890 ************************************ 00:11:32.890 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:32.891 * Looking for test storage... 00:11:32.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.891 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.149 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:33.149 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:33.149 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.149 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.149 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.149 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.149 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.150 01:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:33.150 01:47:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.053 01:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.053 01:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:35.053 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:35.053 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.053 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.053 01:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:35.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.054 
01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:35.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:35.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:11:35.054 00:11:35.054 --- 10.0.0.2 ping statistics --- 00:11:35.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.054 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:11:35.054 00:11:35.054 --- 10.0.0.1 ping statistics --- 00:11:35.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.054 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
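The `nvmf_tcp_init` sequence above moves one physical port (cvl_0_0) into a fresh namespace, addresses both sides out of 10.0.0.0/24, opens TCP 4420 in iptables, and verifies reachability in both directions with a single ping each way. A sketch of the same topology, assuming a veth pair (the names `veth_ini`/`veth_tgt` are made up here) stands in for the two E810 ports; `run()` echoes instead of executing by default, since the real commands require root:

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology nvmf_tcp_init builds in the log.
# Set DRY_RUN=0 (as root) to actually execute the commands.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link add veth_ini type veth peer name veth_tgt
run ip link set veth_tgt netns "$NS"                       # target side lives in the netns
run ip addr add 10.0.0.1/24 dev veth_ini                   # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt  # target IP
run ip link set veth_ini up
run ip netns exec "$NS" ip link set veth_tgt up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
# Reachability check in both directions, as the log does:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Splitting target and initiator across a namespace boundary is what lets one host exercise a real TCP path: the kernel routes between the two interfaces rather than short-circuiting over loopback.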
00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2207965 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2207965 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2207965 ']' 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.054 01:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.054 [2024-07-26 01:47:16.962230] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:11:35.055 [2024-07-26 01:47:16.962312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.055 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.055 [2024-07-26 01:47:17.032942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.314 [2024-07-26 01:47:17.126798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.314 [2024-07-26 01:47:17.126859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.314 [2024-07-26 01:47:17.126875] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.314 [2024-07-26 01:47:17.126889] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.314 [2024-07-26 01:47:17.126900] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
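After launching `nvmf_tgt` inside the namespace, the script blocks in `waitforlisten` (note `max_retries=100` and the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message) until the RPC socket is usable. A simplified stand-in for that polling loop, which only waits for the socket path to appear; the real helper additionally issues an RPC over the socket to confirm the target answers:

```shell
#!/bin/sh
# wait_for_sock: simplified sketch of SPDK's waitforlisten pattern.
# Polls until $1 exists on disk, giving up after $2 attempts (default 100,
# matching max_retries in the log). Returns 0 on success, 1 on timeout.
wait_for_sock() {
    sock=$1; max=${2:-100}; i=0
    while [ "$i" -lt "$max" ]; do
        [ -e "$sock" ] && return 0   # real waitforlisten also sends an RPC here
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```

Used as `wait_for_sock /var/tmp/spdk.sock 100 || echo "target failed to start"`, mirroring how the autotest aborts if the app never comes up.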
00:11:35.314 [2024-07-26 01:47:17.126994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.314 [2024-07-26 01:47:17.127072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.314 [2024-07-26 01:47:17.127126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.314 [2024-07-26 01:47:17.127129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.314 [2024-07-26 01:47:17.279633] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.314 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.575 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.575 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.575 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.575 01:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.575 [2024-07-26 01:47:17.336388] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.575 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.575 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:35.575 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:35.575 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:35.575 01:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:38.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.432 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.060 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.046 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.885 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.221 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:26.221 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:26.221 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:26.221 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:26.221 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:26.221 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:26.221 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:26.221 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:26.221 rmmod nvme_tcp 00:15:26.221 rmmod nvme_fabrics 00:15:26.221 rmmod nvme_keyring 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2207965 ']' 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2207965 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@950 -- # '[' -z 2207965 ']' 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2207965 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2207965 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2207965' 00:15:26.480 killing process with pid 2207965 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2207965 00:15:26.480 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2207965 00:15:26.739 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:26.739 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:26.739 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:26.739 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.739 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:26.739 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.739 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.739 01:51:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.645 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:28.645 00:15:28.645 real 3m55.705s 00:15:28.645 user 14m57.867s 00:15:28.645 sys 0m34.707s 00:15:28.645 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:28.645 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:28.645 ************************************ 00:15:28.645 END TEST nvmf_connect_disconnect 00:15:28.645 ************************************ 00:15:28.645 01:51:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:28.645 01:51:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:28.645 01:51:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:28.645 01:51:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.645 ************************************ 00:15:28.645 START TEST nvmf_multitarget 00:15:28.645 ************************************ 00:15:28.645 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:28.645 * Looking for test storage... 
00:15:28.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.645 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.645 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:28.933 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.837 01:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:30.837 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.837 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.838 
01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:30.838 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:30.838 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:30.838 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes 
]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.838 01:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:30.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:15:30.838 00:15:30.838 --- 10.0.0.2 ping statistics --- 00:15:30.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.838 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:15:30.838 00:15:30.838 --- 10.0.0.1 ping statistics --- 00:15:30.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.838 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2239445 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2239445 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2239445 ']' 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.838 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:30.838 [2024-07-26 01:51:12.767988] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:15:30.838 [2024-07-26 01:51:12.768084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.838 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.838 [2024-07-26 01:51:12.833785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.096 [2024-07-26 01:51:12.925942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.096 [2024-07-26 01:51:12.926003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:31.096 [2024-07-26 01:51:12.926021] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.096 [2024-07-26 01:51:12.926034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.096 [2024-07-26 01:51:12.926046] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.096 [2024-07-26 01:51:12.926425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.096 [2024-07-26 01:51:12.926478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.096 [2024-07-26 01:51:12.926602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.096 [2024-07-26 01:51:12.926708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.096 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.096 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:15:31.096 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.096 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:31.097 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:31.097 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.097 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:31.097 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:31.097 01:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:31.355 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:31.355 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:31.355 "nvmf_tgt_1" 00:15:31.355 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:31.613 "nvmf_tgt_2" 00:15:31.613 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:31.613 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:31.613 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:31.613 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:31.871 true 00:15:31.871 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:31.871 true 00:15:31.871 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:31.871 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:32.128 rmmod nvme_tcp 00:15:32.128 rmmod nvme_fabrics 00:15:32.128 rmmod nvme_keyring 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2239445 ']' 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2239445 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2239445 ']' 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2239445 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2239445 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2239445' 00:15:32.128 killing process with pid 2239445 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2239445 00:15:32.128 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2239445 00:15:32.388 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:32.388 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:32.388 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:32.388 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.388 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.388 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.388 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.388 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.296 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:34.296 00:15:34.296 real 
0m5.624s 00:15:34.296 user 0m6.463s 00:15:34.296 sys 0m1.803s 00:15:34.296 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.296 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:34.296 ************************************ 00:15:34.296 END TEST nvmf_multitarget 00:15:34.296 ************************************ 00:15:34.296 01:51:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:34.296 01:51:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:34.296 01:51:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.296 01:51:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.296 ************************************ 00:15:34.296 START TEST nvmf_rpc 00:15:34.296 ************************************ 00:15:34.296 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:34.557 * Looking for test storage... 
00:15:34.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.557 
01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.557 01:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:34.557 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:36.459 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:36.459 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:15:36.459 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:36.460 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:36.460 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.460 01:51:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.460 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:36.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:15:36.719 00:15:36.719 --- 10.0.0.2 ping statistics --- 00:15:36.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.719 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:15:36.719 00:15:36.719 --- 10.0.0.1 ping statistics --- 00:15:36.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.719 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2241543 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2241543 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2241543 ']' 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.719 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.719 [2024-07-26 01:51:18.615683] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:15:36.719 [2024-07-26 01:51:18.615754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.719 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.719 [2024-07-26 01:51:18.681876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.978 [2024-07-26 01:51:18.777675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.978 [2024-07-26 01:51:18.777728] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.978 [2024-07-26 01:51:18.777756] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.978 [2024-07-26 01:51:18.777769] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.978 [2024-07-26 01:51:18.777779] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:36.978 [2024-07-26 01:51:18.777859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.978 [2024-07-26 01:51:18.777933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.978 [2024-07-26 01:51:18.777978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.978 [2024-07-26 01:51:18.777980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:36.978 "tick_rate": 2700000000, 00:15:36.978 "poll_groups": [ 00:15:36.978 { 00:15:36.978 "name": "nvmf_tgt_poll_group_000", 00:15:36.978 "admin_qpairs": 0, 00:15:36.978 "io_qpairs": 0, 00:15:36.978 "current_admin_qpairs": 0, 00:15:36.978 "current_io_qpairs": 0, 00:15:36.978 "pending_bdev_io": 0, 00:15:36.978 "completed_nvme_io": 0, 
00:15:36.978 "transports": [] 00:15:36.978 }, 00:15:36.978 { 00:15:36.978 "name": "nvmf_tgt_poll_group_001", 00:15:36.978 "admin_qpairs": 0, 00:15:36.978 "io_qpairs": 0, 00:15:36.978 "current_admin_qpairs": 0, 00:15:36.978 "current_io_qpairs": 0, 00:15:36.978 "pending_bdev_io": 0, 00:15:36.978 "completed_nvme_io": 0, 00:15:36.978 "transports": [] 00:15:36.978 }, 00:15:36.978 { 00:15:36.978 "name": "nvmf_tgt_poll_group_002", 00:15:36.978 "admin_qpairs": 0, 00:15:36.978 "io_qpairs": 0, 00:15:36.978 "current_admin_qpairs": 0, 00:15:36.978 "current_io_qpairs": 0, 00:15:36.978 "pending_bdev_io": 0, 00:15:36.978 "completed_nvme_io": 0, 00:15:36.978 "transports": [] 00:15:36.978 }, 00:15:36.978 { 00:15:36.978 "name": "nvmf_tgt_poll_group_003", 00:15:36.978 "admin_qpairs": 0, 00:15:36.978 "io_qpairs": 0, 00:15:36.978 "current_admin_qpairs": 0, 00:15:36.978 "current_io_qpairs": 0, 00:15:36.978 "pending_bdev_io": 0, 00:15:36.978 "completed_nvme_io": 0, 00:15:36.978 "transports": [] 00:15:36.978 } 00:15:36.978 ] 00:15:36.978 }' 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:36.978 01:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.237 [2024-07-26 01:51:19.025924] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:37.237 "tick_rate": 2700000000, 00:15:37.237 "poll_groups": [ 00:15:37.237 { 00:15:37.237 "name": "nvmf_tgt_poll_group_000", 00:15:37.237 "admin_qpairs": 0, 00:15:37.237 "io_qpairs": 0, 00:15:37.237 "current_admin_qpairs": 0, 00:15:37.237 "current_io_qpairs": 0, 00:15:37.237 "pending_bdev_io": 0, 00:15:37.237 "completed_nvme_io": 0, 00:15:37.237 "transports": [ 00:15:37.237 { 00:15:37.237 "trtype": "TCP" 00:15:37.237 } 00:15:37.237 ] 00:15:37.237 }, 00:15:37.237 { 00:15:37.237 "name": "nvmf_tgt_poll_group_001", 00:15:37.237 "admin_qpairs": 0, 00:15:37.237 "io_qpairs": 0, 00:15:37.237 "current_admin_qpairs": 0, 00:15:37.237 "current_io_qpairs": 0, 00:15:37.237 "pending_bdev_io": 0, 00:15:37.237 "completed_nvme_io": 0, 00:15:37.237 "transports": [ 00:15:37.237 { 00:15:37.237 "trtype": "TCP" 00:15:37.237 } 00:15:37.237 ] 00:15:37.237 }, 00:15:37.237 { 00:15:37.237 "name": "nvmf_tgt_poll_group_002", 00:15:37.237 "admin_qpairs": 0, 00:15:37.237 "io_qpairs": 0, 00:15:37.237 "current_admin_qpairs": 0, 00:15:37.237 "current_io_qpairs": 0, 00:15:37.237 "pending_bdev_io": 0, 00:15:37.237 "completed_nvme_io": 0, 00:15:37.237 
"transports": [ 00:15:37.237 { 00:15:37.237 "trtype": "TCP" 00:15:37.237 } 00:15:37.237 ] 00:15:37.237 }, 00:15:37.237 { 00:15:37.237 "name": "nvmf_tgt_poll_group_003", 00:15:37.237 "admin_qpairs": 0, 00:15:37.237 "io_qpairs": 0, 00:15:37.237 "current_admin_qpairs": 0, 00:15:37.237 "current_io_qpairs": 0, 00:15:37.237 "pending_bdev_io": 0, 00:15:37.237 "completed_nvme_io": 0, 00:15:37.237 "transports": [ 00:15:37.237 { 00:15:37.237 "trtype": "TCP" 00:15:37.237 } 00:15:37.237 ] 00:15:37.237 } 00:15:37.237 ] 00:15:37.237 }' 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:37.237 01:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.237 Malloc1 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.237 [2024-07-26 01:51:19.179647] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:37.237 [2024-07-26 01:51:19.202083] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:37.237 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:37.237 could not add new controller: failed to write to nvme-fabrics device 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
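The `*ERROR*` above (`Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host ...`, from `ctrlr.c:nvmf_qpair_access_allowed`) is the target's per-subsystem host allow-list at work: the subsystem was created with `allow_any_host` disabled (`nvmf_subsystem_allow_any_host -d`) and no matching `nvmf_subsystem_add_host` entry existed yet, so the connect is rejected until the host NQN is added. A minimal Python model of that decision follows; the helper name and dict layout are hypothetical, not SPDK's actual implementation:

```python
# Illustrative model of the subsystem host-access check exercised in this
# log. SPDK's real check lives in lib/nvmf/ctrlr.c; this is only a sketch.

def host_allowed(subsystem: dict, host_nqn: str) -> bool:
    """Return True if host_nqn may connect to this subsystem."""
    if subsystem.get("allow_any_host"):
        return True  # nvmf_subsystem_allow_any_host -e: everyone admitted
    return host_nqn in subsystem.get("hosts", [])

subsys = {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "allow_any_host": False,  # created with allow-any-host disabled (-d)
    "hosts": [],
}
host = "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

print(host_allowed(subsys, host))  # False: connect fails, as in the log
subsys["hosts"].append(host)       # effect of nvmf_subsystem_add_host
print(host_allowed(subsys, host))  # True: the next connect succeeds
```

This mirrors the test flow here: the first `nvme connect` is expected to fail (`NOT ... nvme connect`), then `nvmf_subsystem_add_host` is issued and the same connect succeeds.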
00:15:37.237 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:38.173 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.173 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:38.173 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.173 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:38.173 01:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:40.076 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.077 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:40.077 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.077 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.077 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:40.077 01:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:40.077 [2024-07-26 01:51:22.022324] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:40.077 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:40.077 could not add new controller: failed to write to nvme-fabrics device 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.077 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:41.014 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:41.014 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:41.014 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.014 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:41.014 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:42.916 01:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:42.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.916 01:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.916 [2024-07-26 01:51:24.832641] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.916 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:43.485 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:43.485 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:43.485 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.485 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:43.485 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:46.018 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:46.018 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:46.018 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:46.018 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:46.018 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:46.018 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:46.018 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:46.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.018 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:46.018 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:46.018 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:46.018 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.019 [2024-07-26 01:51:27.600017] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.019 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:46.278 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:46.278 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
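The `waitforserial` helper traced above polls `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` up to 15 times, sleeping between attempts, until the expected number of namespaces shows up after `nvme connect`. That polling pattern can be sketched in Python; the probe callable is injected here (a hypothetical stand-in for the `lsblk | grep -c` pipeline) so the sketch stays self-contained, and the real script sleeps 2 s between polls where this defaults to 0 for illustration:

```python
import time

def wait_for_devices(probe, expected: int, retries: int = 15,
                     delay: float = 0.0) -> bool:
    """Poll probe() (e.g. a count of lsblk rows matching a serial) until it
    reports `expected` devices, or give up after `retries` attempts."""
    for _ in range(retries):
        if probe() == expected:
            return True
        time.sleep(delay)
    return False

# Simulated probe: the namespace appears on the third poll.
counts = iter([0, 0, 1])
print(wait_for_devices(lambda: next(counts, 1), expected=1))  # True
```

The companion `waitforserial_disconnect` helper in this log is the inverse: it loops until the serial no longer appears in the `lsblk` output after `nvme disconnect`.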
00:15:46.278 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:46.278 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:46.278 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:48.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.816 [2024-07-26 01:51:30.326888] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.816 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.817 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:48.817 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.817 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.817 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.817 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:49.082 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:49.083 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:49.083 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:49.083 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:49.083 
01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:51.015 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:51.015 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:51.015 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:51.015 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:51.015 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:51.015 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:51.015 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.275 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.276 [2024-07-26 01:51:33.096448] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.276 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:51.276 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:51.276 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.276 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.276 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.276 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:51.276 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.276 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.276 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.276 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:51.845 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:51.845 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:51.845 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:51.845 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:51.845 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:54.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.382 [2024-07-26 01:51:35.984028] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.382 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.382 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.382 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:54.641 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:54.641 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:54.641 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.641 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:54.641 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:57.174 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.175 01:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 [2024-07-26 01:51:38.755358] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 
01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 
01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 [2024-07-26 01:51:38.803422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 [2024-07-26 01:51:38.851575] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 [2024-07-26 01:51:38.899741] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.175 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.176 [2024-07-26 01:51:38.947893] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.176 01:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.176 01:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.176 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:57.176 "tick_rate": 2700000000, 00:15:57.176 "poll_groups": [ 00:15:57.176 { 00:15:57.176 "name": "nvmf_tgt_poll_group_000", 00:15:57.176 "admin_qpairs": 2, 00:15:57.176 "io_qpairs": 84, 00:15:57.176 "current_admin_qpairs": 0, 00:15:57.176 "current_io_qpairs": 0, 00:15:57.176 "pending_bdev_io": 0, 00:15:57.176 "completed_nvme_io": 87, 00:15:57.176 "transports": [ 00:15:57.176 { 00:15:57.176 "trtype": "TCP" 00:15:57.176 } 00:15:57.176 ] 00:15:57.176 }, 00:15:57.176 { 00:15:57.176 "name": "nvmf_tgt_poll_group_001", 00:15:57.176 "admin_qpairs": 2, 00:15:57.176 "io_qpairs": 84, 00:15:57.176 "current_admin_qpairs": 0, 00:15:57.176 "current_io_qpairs": 0, 00:15:57.176 "pending_bdev_io": 0, 00:15:57.176 "completed_nvme_io": 184, 00:15:57.176 "transports": [ 00:15:57.176 { 00:15:57.176 "trtype": "TCP" 00:15:57.176 } 00:15:57.176 ] 00:15:57.176 }, 00:15:57.176 { 00:15:57.176 "name": "nvmf_tgt_poll_group_002", 00:15:57.176 "admin_qpairs": 1, 00:15:57.176 "io_qpairs": 84, 00:15:57.176 "current_admin_qpairs": 0, 00:15:57.176 "current_io_qpairs": 0, 00:15:57.176 "pending_bdev_io": 0, 00:15:57.176 "completed_nvme_io": 197, 00:15:57.176 "transports": [ 00:15:57.176 { 00:15:57.176 "trtype": "TCP" 00:15:57.176 } 00:15:57.176 ] 00:15:57.176 }, 00:15:57.176 { 00:15:57.176 "name": "nvmf_tgt_poll_group_003", 00:15:57.176 "admin_qpairs": 2, 00:15:57.176 "io_qpairs": 84, 00:15:57.176 "current_admin_qpairs": 0, 00:15:57.176 "current_io_qpairs": 0, 00:15:57.176 "pending_bdev_io": 0, 
00:15:57.176 "completed_nvme_io": 218, 00:15:57.176 "transports": [ 00:15:57.176 { 00:15:57.176 "trtype": "TCP" 00:15:57.176 } 00:15:57.176 ] 00:15:57.176 } 00:15:57.176 ] 00:15:57.176 }' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:57.176 rmmod nvme_tcp 00:15:57.176 rmmod nvme_fabrics 00:15:57.176 rmmod nvme_keyring 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2241543 ']' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2241543 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2241543 ']' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2241543 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2241543 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2241543' 00:15:57.176 killing process with pid 2241543 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2241543 00:15:57.176 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 2241543 00:15:57.435 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:57.435 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:57.435 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:57.435 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:57.435 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:57.435 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.435 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.435 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:59.973 00:15:59.973 real 0m25.193s 00:15:59.973 user 1m21.823s 00:15:59.973 sys 0m4.016s 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.973 ************************************ 00:15:59.973 END TEST nvmf_rpc 00:15:59.973 ************************************ 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:15:59.973 ************************************ 00:15:59.973 START TEST nvmf_invalid 00:15:59.973 ************************************ 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:59.973 * Looking for test storage... 00:15:59.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.973 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:59.974 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.880 01:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.880 
01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.880 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:01.881 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.881 01:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:01.881 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.881 
01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:01.881 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:01.881 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:01.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:16:01.881 00:16:01.881 --- 10.0.0.2 ping statistics --- 00:16:01.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.881 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:16:01.881 00:16:01.881 --- 10.0.0.1 ping statistics --- 00:16:01.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.881 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2246025 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2246025 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2246025 ']' 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.881 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:01.881 [2024-07-26 01:51:43.679442] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:16:01.881 [2024-07-26 01:51:43.679533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.881 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.881 [2024-07-26 01:51:43.756305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.881 [2024-07-26 01:51:43.854404] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.881 [2024-07-26 01:51:43.854477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:01.881 [2024-07-26 01:51:43.854494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.881 [2024-07-26 01:51:43.854508] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.881 [2024-07-26 01:51:43.854519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.881 [2024-07-26 01:51:43.854580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.882 [2024-07-26 01:51:43.854636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.882 [2024-07-26 01:51:43.854689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.882 [2024-07-26 01:51:43.854692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.140 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.140 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:16:02.140 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:02.140 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:02.140 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:02.140 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.140 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:02.140 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10240 00:16:02.397 [2024-07-26 01:51:44.245230] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:02.397 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:02.397 { 00:16:02.397 "nqn": "nqn.2016-06.io.spdk:cnode10240", 00:16:02.397 "tgt_name": "foobar", 00:16:02.397 "method": "nvmf_create_subsystem", 00:16:02.397 "req_id": 1 00:16:02.397 } 00:16:02.397 Got JSON-RPC error response 00:16:02.397 response: 00:16:02.397 { 00:16:02.397 "code": -32603, 00:16:02.397 "message": "Unable to find target foobar" 00:16:02.397 }' 00:16:02.397 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:02.397 { 00:16:02.397 "nqn": "nqn.2016-06.io.spdk:cnode10240", 00:16:02.397 "tgt_name": "foobar", 00:16:02.397 "method": "nvmf_create_subsystem", 00:16:02.397 "req_id": 1 00:16:02.397 } 00:16:02.397 Got JSON-RPC error response 00:16:02.397 response: 00:16:02.397 { 00:16:02.397 "code": -32603, 00:16:02.397 "message": "Unable to find target foobar" 00:16:02.397 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:02.397 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:02.397 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3112 00:16:02.655 [2024-07-26 01:51:44.502129] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3112: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:02.655 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:02.655 { 00:16:02.655 "nqn": "nqn.2016-06.io.spdk:cnode3112", 00:16:02.655 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:02.655 "method": "nvmf_create_subsystem", 00:16:02.655 "req_id": 1 00:16:02.655 } 00:16:02.655 Got JSON-RPC error response 00:16:02.655 response: 
00:16:02.655 { 00:16:02.655 "code": -32602, 00:16:02.655 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:02.655 }' 00:16:02.655 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:02.655 { 00:16:02.655 "nqn": "nqn.2016-06.io.spdk:cnode3112", 00:16:02.655 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:02.655 "method": "nvmf_create_subsystem", 00:16:02.655 "req_id": 1 00:16:02.655 } 00:16:02.655 Got JSON-RPC error response 00:16:02.655 response: 00:16:02.655 { 00:16:02.655 "code": -32602, 00:16:02.655 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:02.655 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:02.655 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:02.655 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23635 00:16:02.914 [2024-07-26 01:51:44.746946] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23635: invalid model number 'SPDK_Controller' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:02.914 { 00:16:02.914 "nqn": "nqn.2016-06.io.spdk:cnode23635", 00:16:02.914 "model_number": "SPDK_Controller\u001f", 00:16:02.914 "method": "nvmf_create_subsystem", 00:16:02.914 "req_id": 1 00:16:02.914 } 00:16:02.914 Got JSON-RPC error response 00:16:02.914 response: 00:16:02.914 { 00:16:02.914 "code": -32602, 00:16:02.914 "message": "Invalid MN SPDK_Controller\u001f" 00:16:02.914 }' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:02.914 { 00:16:02.914 "nqn": "nqn.2016-06.io.spdk:cnode23635", 00:16:02.914 "model_number": "SPDK_Controller\u001f", 00:16:02.914 "method": "nvmf_create_subsystem", 00:16:02.914 "req_id": 1 00:16:02.914 } 
00:16:02.914 Got JSON-RPC error response 00:16:02.914 response: 00:16:02.914 { 00:16:02.914 "code": -32602, 00:16:02.914 "message": "Invalid MN SPDK_Controller\u001f" 00:16:02.914 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:02.914 01:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:02.914 01:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:02.914 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:02.915 01:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.915 01:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ % == \- ]] 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '%25l3;D*7:U` Shq`kLtn' 00:16:02.915 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '%25l3;D*7:U` Shq`kLtn' nqn.2016-06.io.spdk:cnode23316 00:16:03.173 [2024-07-26 01:51:45.064010] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23316: invalid serial number '%25l3;D*7:U` Shq`kLtn' 00:16:03.173 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:03.173 { 00:16:03.173 "nqn": "nqn.2016-06.io.spdk:cnode23316", 00:16:03.173 "serial_number": "%25l3;D*7:U` Shq`kLtn", 00:16:03.173 "method": "nvmf_create_subsystem", 00:16:03.173 "req_id": 1 00:16:03.173 } 00:16:03.173 Got JSON-RPC error response 00:16:03.173 response: 00:16:03.173 { 00:16:03.173 "code": -32602, 00:16:03.173 "message": "Invalid SN %25l3;D*7:U` Shq`kLtn" 00:16:03.173 }' 00:16:03.173 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:03.173 { 00:16:03.173 "nqn": "nqn.2016-06.io.spdk:cnode23316", 00:16:03.173 "serial_number": "%25l3;D*7:U` Shq`kLtn", 00:16:03.173 "method": "nvmf_create_subsystem", 00:16:03.173 "req_id": 1 00:16:03.173 } 00:16:03.173 Got JSON-RPC error response 
00:16:03.173 response: 00:16:03.173 { 00:16:03.173 "code": -32602, 00:16:03.173 "message": "Invalid SN %25l3;D*7:U` Shq`kLtn" 00:16:03.173 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:03.173 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:03.173 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:03.173 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:03.173 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:03.173 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:03.173 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 117 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=, 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x72' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 
00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.174 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:03.175 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:03.175 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:03.175 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.175 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.175 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:03.175 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:03.175 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:03.175 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.175 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.175 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:03.433 
01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.433 01:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.433 01:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.433 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:03.434 01:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ < == \- ]] 00:16:03.434 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ' /dev/null' 00:16:06.017 01:51:47 
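The loop traced above is target/invalid.sh assembling an invalid subsystem name one character at a time: emit an ASCII code with `printf %x`, render it with `echo -e '\xNN'`, and append it to `$string`. A minimal sketch of that pattern (the function name and the uniform 0x21-0x7e code range are our assumptions; the real script selects its codes differently):

```shell
#!/usr/bin/env bash
# Sketch of the printf-%x / echo -e character-assembly loop seen in the trace.
# gen_random_string is a hypothetical helper, not part of invalid.sh.
gen_random_string() {
    local length=$1 string='' ll code hex ch
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))   # printable ASCII, 0x21-0x7e
        hex=$(printf %x "$code")       # e.g. 77 -> 4d
        ch=$(echo -e "\\x$hex")        # \x4d -> M
        string+=$ch
    done
    printf '%s' "$string"
}
```

Each `(( ll++ )) / (( ll < length ))` pair in the log corresponds to one iteration here; the `M`, `G`, `]`, `j`, `[`, `5` appends above are the individual `string+=` steps.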
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.553 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:08.553 00:16:08.553 real 0m8.424s 00:16:08.553 user 0m19.626s 00:16:08.553 sys 0m2.399s 00:16:08.553 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.553 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:08.553 ************************************ 00:16:08.553 END TEST nvmf_invalid 00:16:08.553 ************************************ 00:16:08.553 01:51:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:08.553 01:51:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:08.553 01:51:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.553 01:51:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:08.553 ************************************ 00:16:08.553 START TEST nvmf_connect_stress 00:16:08.553 ************************************ 00:16:08.553 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:08.553 * Looking for test storage... 
00:16:08.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.553 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.554 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.489 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:10.489 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:10.489 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:10.489 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:10.489 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:10.489 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:10.489 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:16:10.489 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:10.489 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:10.489 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:10.489 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:10.490 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.490 01:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:10.490 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:10.490 01:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:10.490 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:10.490 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:10.490 
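The discovery pass above walks each PCI NIC, compares its device ID against known Intel and Mellanox IDs, and buckets it (both 0x159b ports land in the e810 list, hence the `ice` driver checks). A condensed sketch of that classification (the function name is ours, and the table is trimmed to the IDs visible in this log):

```shell
#!/usr/bin/env bash
# Hypothetical classifier mirroring nvmf/common.sh's e810/x722/mlx bucketing.
classify_nic() {
    case $1 in
        0x1592|0x159b) echo e810 ;;    # Intel E810 family (ice driver)
        0x37d2)        echo x722 ;;
        0x1017|0x1019) echo mlx ;;     # the IDs tested by the [[ ]] lines above
        *)             echo unknown ;;
    esac
}
classify_nic 0x159b   # -> e810, matching "Found 0000:0a:00.0 (0x8086 - 0x159b)"
```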
01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.490 
01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:10.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:16:10.490 00:16:10.490 --- 10.0.0.2 ping statistics --- 00:16:10.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.490 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:10.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:16:10.490 00:16:10.490 --- 10.0.0.1 ping statistics --- 00:16:10.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.490 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:10.490 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.491 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2248653 00:16:10.491 01:51:52 
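After plumbing the namespace, the harness pings in both directions and the kernel prints a `rtt min/avg/max/mdev = .../.../.../... ms` summary for each. A small sketch (the awk program and helper name are ours) of extracting the average rtt from that exact summary format:

```shell
#!/usr/bin/env bash
# Pull the avg field out of ping's rtt summary line, e.g.
#   rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
parse_avg_rtt() {
    # split on '/' and spaces: rtt min avg max mdev = MIN AVG MAX MDEV ms
    printf '%s\n' "$1" | awk -F'[/ ]' '/^rtt/ { print $8 }'
}
parse_avg_rtt 'rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms'   # prints 0.079
```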
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:10.491 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2248653 00:16:10.491 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2248653 ']' 00:16:10.491 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.491 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:10.491 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.491 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:10.491 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.491 [2024-07-26 01:51:52.246678] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:16:10.491 [2024-07-26 01:51:52.246764] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.491 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.491 [2024-07-26 01:51:52.319533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:10.491 [2024-07-26 01:51:52.404542] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:10.491 [2024-07-26 01:51:52.404608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.491 [2024-07-26 01:51:52.404621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.491 [2024-07-26 01:51:52.404632] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.491 [2024-07-26 01:51:52.404656] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.491 [2024-07-26 01:51:52.404749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.491 [2024-07-26 01:51:52.404812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.491 [2024-07-26 01:51:52.404814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.750 [2024-07-26 01:51:52.546840] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.750 [2024-07-26 01:51:52.578308] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.750 NULL1 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.750 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2248681 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.751 01:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.751 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.010 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.010 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:11.010 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.010 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.010 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.586 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.586 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:11.586 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.586 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.586 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.842 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.842 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:11.842 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.842 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.842 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.100 
01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.100 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:12.100 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.100 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.100 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.357 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.357 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:12.357 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.357 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.357 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.617 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.617 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:12.617 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.617 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.617 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.185 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.185 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 
00:16:13.185 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.185 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.185 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.443 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.443 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:13.443 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.443 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.443 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.702 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.703 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:13.703 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.703 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.703 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.962 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.962 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:13.962 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.962 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:13.962 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.220 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.220 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:14.220 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.220 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.220 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.786 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.786 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:14.786 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.786 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.786 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.045 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.045 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:15.045 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.045 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.045 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.306 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:16:15.306 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:15.306 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.306 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.306 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.565 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.565 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:15.565 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.565 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.565 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.824 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.824 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:15.824 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.824 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.825 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.392 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.392 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:16.392 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.393 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.393 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.650 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.650 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:16.650 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.650 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.650 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.910 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.910 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:16.910 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.910 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.910 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.170 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.170 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:17.170 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.170 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.170 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@10 -- # set +x 00:16:17.429 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.430 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:17.430 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.430 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.430 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.997 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.997 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:17.997 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.997 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.997 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.254 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.254 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:18.254 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.255 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.255 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.510 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.510 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:18.510 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.510 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.510 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.767 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.767 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:18.767 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.767 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.767 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.024 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.024 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:19.024 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.024 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.024 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.588 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.588 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:19.588 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.588 01:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.588 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.845 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.845 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:19.845 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.845 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.845 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.102 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.102 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:20.102 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.102 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.102 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.358 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.358 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:20.358 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.358 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.358 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.614 
01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.614 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:20.614 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.614 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.614 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.871 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2248681 00:16:21.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2248681) - No such process 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2248681 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.128 01:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.128 rmmod nvme_tcp 00:16:21.128 rmmod nvme_fabrics 00:16:21.128 rmmod nvme_keyring 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2248653 ']' 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2248653 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2248653 ']' 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2248653 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.128 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2248653 00:16:21.128 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:21.128 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:21.128 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2248653' 00:16:21.128 killing process with pid 2248653 00:16:21.128 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2248653 00:16:21.128 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2248653 00:16:21.387 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.387 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.387 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.387 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.387 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.387 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.387 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.387 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.284 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:23.284 00:16:23.284 real 0m15.266s 00:16:23.284 user 0m38.384s 00:16:23.284 sys 0m5.864s 00:16:23.284 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.284 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.284 ************************************ 00:16:23.284 END TEST nvmf_connect_stress 00:16:23.284 ************************************ 00:16:23.284 01:52:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:23.284 01:52:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:23.284 01:52:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.284 01:52:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:23.542 ************************************ 00:16:23.542 START TEST nvmf_fused_ordering 00:16:23.542 ************************************ 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:23.543 * Looking for test storage... 00:16:23.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.543 01:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:23.543 01:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:23.543 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.445 01:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:25.445 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.445 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:25.446 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:25.446 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:25.446 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.446 01:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:25.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:16:25.446 00:16:25.446 --- 10.0.0.2 ping statistics --- 00:16:25.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.446 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:25.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:25.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:16:25.446 00:16:25.446 --- 10.0.0.1 ping statistics --- 00:16:25.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.446 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2251813 00:16:25.446 01:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2251813 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2251813 ']' 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.446 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.446 [2024-07-26 01:52:07.450923] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:16:25.446 [2024-07-26 01:52:07.451027] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.705 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.705 [2024-07-26 01:52:07.522891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.705 [2024-07-26 01:52:07.617991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:25.705 [2024-07-26 01:52:07.618070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.705 [2024-07-26 01:52:07.618089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.705 [2024-07-26 01:52:07.618114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.705 [2024-07-26 01:52:07.618126] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.705 [2024-07-26 01:52:07.618157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.964 [2024-07-26 01:52:07.756913] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.964 [2024-07-26 01:52:07.773137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.964 NULL1 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:25.964 01:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.964 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:25.964 [2024-07-26 01:52:07.818316] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:16:25.964 [2024-07-26 01:52:07.818371] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251958 ] 00:16:25.964 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.234 Attached to nqn.2016-06.io.spdk:cnode1 00:16:26.234 Namespace ID: 1 size: 1GB 00:16:26.234 fused_ordering(0) 00:16:26.234 fused_ordering(1) 00:16:26.234 fused_ordering(2) 00:16:26.234 fused_ordering(3) 00:16:26.234 fused_ordering(4) 00:16:26.234 fused_ordering(5) 00:16:26.234 fused_ordering(6) 00:16:26.234 fused_ordering(7) 00:16:26.234 fused_ordering(8) 00:16:26.234 fused_ordering(9) 00:16:26.234 fused_ordering(10) 00:16:26.234 fused_ordering(11) 00:16:26.234 fused_ordering(12) 00:16:26.234 fused_ordering(13) 00:16:26.234 fused_ordering(14) 00:16:26.234 fused_ordering(15) 00:16:26.234 fused_ordering(16) 00:16:26.234 fused_ordering(17) 00:16:26.234 fused_ordering(18) 00:16:26.234 fused_ordering(19) 00:16:26.234 fused_ordering(20) 00:16:26.234 fused_ordering(21) 00:16:26.234 fused_ordering(22) 00:16:26.234 fused_ordering(23) 00:16:26.234 fused_ordering(24) 00:16:26.234 fused_ordering(25) 00:16:26.234 fused_ordering(26) 00:16:26.234 fused_ordering(27) 00:16:26.234 fused_ordering(28) 00:16:26.234 fused_ordering(29) 00:16:26.234 fused_ordering(30) 00:16:26.234 fused_ordering(31) 00:16:26.234 fused_ordering(32) 00:16:26.234 fused_ordering(33) 00:16:26.234 fused_ordering(34) 00:16:26.234 fused_ordering(35) 00:16:26.234 fused_ordering(36) 00:16:26.234 fused_ordering(37) 00:16:26.235 fused_ordering(38) 00:16:26.235 fused_ordering(39) 00:16:26.235 fused_ordering(40) 00:16:26.235 fused_ordering(41) 00:16:26.235 fused_ordering(42) 00:16:26.235 fused_ordering(43) 00:16:26.235 fused_ordering(44) 00:16:26.235 fused_ordering(45) 00:16:26.235 fused_ordering(46) 00:16:26.235 fused_ordering(47) 00:16:26.235 
fused_ordering(48) 00:16:26.235 fused_ordering(49) 00:16:26.235 fused_ordering(50) 00:16:26.235 fused_ordering(51) 00:16:26.235 fused_ordering(52) 00:16:26.235 fused_ordering(53) 00:16:26.235 fused_ordering(54) 00:16:26.235 fused_ordering(55) 00:16:26.235 fused_ordering(56) 00:16:26.235 fused_ordering(57) 00:16:26.235 fused_ordering(58) 00:16:26.235 fused_ordering(59) 00:16:26.235 fused_ordering(60) 00:16:26.235 fused_ordering(61) 00:16:26.235 fused_ordering(62) 00:16:26.235 fused_ordering(63) 00:16:26.235 fused_ordering(64) 00:16:26.235 fused_ordering(65) 00:16:26.235 fused_ordering(66) 00:16:26.235 fused_ordering(67) 00:16:26.235 fused_ordering(68) 00:16:26.235 fused_ordering(69) 00:16:26.235 fused_ordering(70) 00:16:26.235 fused_ordering(71) 00:16:26.235 fused_ordering(72) 00:16:26.235 fused_ordering(73) 00:16:26.235 fused_ordering(74) 00:16:26.235 fused_ordering(75) 00:16:26.235 fused_ordering(76) 00:16:26.235 fused_ordering(77) 00:16:26.235 fused_ordering(78) 00:16:26.235 fused_ordering(79) 00:16:26.235 fused_ordering(80) 00:16:26.235 fused_ordering(81) 00:16:26.235 fused_ordering(82) 00:16:26.235 fused_ordering(83) 00:16:26.235 fused_ordering(84) 00:16:26.235 fused_ordering(85) 00:16:26.235 fused_ordering(86) 00:16:26.235 fused_ordering(87) 00:16:26.235 fused_ordering(88) 00:16:26.235 fused_ordering(89) 00:16:26.235 fused_ordering(90) 00:16:26.235 fused_ordering(91) 00:16:26.235 fused_ordering(92) 00:16:26.235 fused_ordering(93) 00:16:26.235 fused_ordering(94) 00:16:26.235 fused_ordering(95) 00:16:26.235 fused_ordering(96) 00:16:26.235 fused_ordering(97) 00:16:26.235 fused_ordering(98) 00:16:26.235 fused_ordering(99) 00:16:26.235 fused_ordering(100) 00:16:26.235 fused_ordering(101) 00:16:26.235 fused_ordering(102) 00:16:26.235 fused_ordering(103) 00:16:26.235 fused_ordering(104) 00:16:26.235 fused_ordering(105) 00:16:26.235 fused_ordering(106) 00:16:26.235 fused_ordering(107) 00:16:26.235 fused_ordering(108) 00:16:26.235 fused_ordering(109) 00:16:26.235 
fused_ordering(110) 00:16:26.235 fused_ordering(111) 00:16:26.235 fused_ordering(112) 00:16:26.235 fused_ordering(113) 00:16:26.235 fused_ordering(114) 00:16:26.235 fused_ordering(115) 00:16:26.235 fused_ordering(116) 00:16:26.235 fused_ordering(117) 00:16:26.235 fused_ordering(118) 00:16:26.235 fused_ordering(119) 00:16:26.235 fused_ordering(120) 00:16:26.235 fused_ordering(121) 00:16:26.235 fused_ordering(122) 00:16:26.235 fused_ordering(123) 00:16:26.235 fused_ordering(124) 00:16:26.235 fused_ordering(125) 00:16:26.235 fused_ordering(126) 00:16:26.235 fused_ordering(127) 00:16:26.235 fused_ordering(128) 00:16:26.235 fused_ordering(129) 00:16:26.235 fused_ordering(130) 00:16:26.235 fused_ordering(131) 00:16:26.235 fused_ordering(132) 00:16:26.235 fused_ordering(133) 00:16:26.235 fused_ordering(134) 00:16:26.235 fused_ordering(135) 00:16:26.235 fused_ordering(136) 00:16:26.235 fused_ordering(137) 00:16:26.235 fused_ordering(138) 00:16:26.235 fused_ordering(139) 00:16:26.235 fused_ordering(140) 00:16:26.235 fused_ordering(141) 00:16:26.235 fused_ordering(142) 00:16:26.235 fused_ordering(143) 00:16:26.235 fused_ordering(144) 00:16:26.235 fused_ordering(145) 00:16:26.235 fused_ordering(146) 00:16:26.235 fused_ordering(147) 00:16:26.235 fused_ordering(148) 00:16:26.235 fused_ordering(149) 00:16:26.235 fused_ordering(150) 00:16:26.235 fused_ordering(151) 00:16:26.235 fused_ordering(152) 00:16:26.235 fused_ordering(153) 00:16:26.235 fused_ordering(154) 00:16:26.235 fused_ordering(155) 00:16:26.235 fused_ordering(156) 00:16:26.235 fused_ordering(157) 00:16:26.235 fused_ordering(158) 00:16:26.235 fused_ordering(159) 00:16:26.235 fused_ordering(160) 00:16:26.235 fused_ordering(161) 00:16:26.235 fused_ordering(162) 00:16:26.235 fused_ordering(163) 00:16:26.235 fused_ordering(164) 00:16:26.235 fused_ordering(165) 00:16:26.235 fused_ordering(166) 00:16:26.235 fused_ordering(167) 00:16:26.235 fused_ordering(168) 00:16:26.235 fused_ordering(169) 00:16:26.235 fused_ordering(170) 
00:16:26.235 fused_ordering(171) 00:16:26.235 fused_ordering(172) 00:16:26.235 fused_ordering(173) 00:16:26.235 fused_ordering(174) 00:16:26.235 fused_ordering(175) 00:16:26.235 fused_ordering(176) 00:16:26.235 fused_ordering(177) 00:16:26.235 fused_ordering(178) 00:16:26.235 fused_ordering(179) 00:16:26.235 fused_ordering(180) 00:16:26.235 fused_ordering(181) 00:16:26.235 fused_ordering(182) 00:16:26.235 fused_ordering(183) 00:16:26.235 fused_ordering(184) 00:16:26.235 fused_ordering(185) 00:16:26.235 fused_ordering(186) 00:16:26.235 fused_ordering(187) 00:16:26.235 fused_ordering(188) 00:16:26.235 fused_ordering(189) 00:16:26.235 fused_ordering(190) 00:16:26.235 fused_ordering(191) 00:16:26.235 fused_ordering(192) 00:16:26.235 fused_ordering(193) 00:16:26.235 fused_ordering(194) 00:16:26.235 fused_ordering(195) 00:16:26.235 fused_ordering(196) 00:16:26.235 fused_ordering(197) 00:16:26.235 fused_ordering(198) 00:16:26.235 fused_ordering(199) 00:16:26.235 fused_ordering(200) 00:16:26.235 fused_ordering(201) 00:16:26.235 fused_ordering(202) 00:16:26.235 fused_ordering(203) 00:16:26.235 fused_ordering(204) 00:16:26.235 fused_ordering(205) 00:16:26.824 fused_ordering(206) 00:16:26.824 fused_ordering(207) 00:16:26.824 fused_ordering(208) 00:16:26.824 fused_ordering(209) 00:16:26.824 fused_ordering(210) 00:16:26.824 fused_ordering(211) 00:16:26.824 fused_ordering(212) 00:16:26.824 fused_ordering(213) 00:16:26.824 fused_ordering(214) 00:16:26.824 fused_ordering(215) 00:16:26.824 fused_ordering(216) 00:16:26.824 fused_ordering(217) 00:16:26.824 fused_ordering(218) 00:16:26.824 fused_ordering(219) 00:16:26.824 fused_ordering(220) 00:16:26.824 fused_ordering(221) 00:16:26.824 fused_ordering(222) 00:16:26.824 fused_ordering(223) 00:16:26.824 fused_ordering(224) 00:16:26.824 fused_ordering(225) 00:16:26.824 fused_ordering(226) 00:16:26.824 fused_ordering(227) 00:16:26.824 fused_ordering(228) 00:16:26.824 fused_ordering(229) 00:16:26.824 fused_ordering(230) 00:16:26.824 
fused_ordering(231) 00:16:26.824 fused_ordering(232) 00:16:26.824 fused_ordering(233) 00:16:26.824 fused_ordering(234) 00:16:26.824 fused_ordering(235) 00:16:26.824 fused_ordering(236) 00:16:26.824 fused_ordering(237) 00:16:26.824 fused_ordering(238) 00:16:26.824 fused_ordering(239) 00:16:26.824 fused_ordering(240) 00:16:26.824 fused_ordering(241) 00:16:26.824 fused_ordering(242) 00:16:26.824 fused_ordering(243) 00:16:26.824 fused_ordering(244) 00:16:26.824 fused_ordering(245) 00:16:26.824 fused_ordering(246) 00:16:26.824 fused_ordering(247) 00:16:26.824 fused_ordering(248) 00:16:26.824 fused_ordering(249) 00:16:26.824 fused_ordering(250) 00:16:26.824 fused_ordering(251) 00:16:26.824 fused_ordering(252) 00:16:26.824 fused_ordering(253) 00:16:26.824 fused_ordering(254) 00:16:26.824 fused_ordering(255) 00:16:26.824 fused_ordering(256) 00:16:26.824 fused_ordering(257) 00:16:26.824 fused_ordering(258) 00:16:26.824 fused_ordering(259) 00:16:26.824 fused_ordering(260) 00:16:26.824 fused_ordering(261) 00:16:26.824 fused_ordering(262) 00:16:26.824 fused_ordering(263) 00:16:26.824 fused_ordering(264) 00:16:26.824 fused_ordering(265) 00:16:26.824 fused_ordering(266) 00:16:26.824 fused_ordering(267) 00:16:26.824 fused_ordering(268) 00:16:26.824 fused_ordering(269) 00:16:26.824 fused_ordering(270) 00:16:26.824 fused_ordering(271) 00:16:26.824 fused_ordering(272) 00:16:26.824 fused_ordering(273) 00:16:26.824 fused_ordering(274) 00:16:26.824 fused_ordering(275) 00:16:26.824 fused_ordering(276) 00:16:26.824 fused_ordering(277) 00:16:26.824 fused_ordering(278) 00:16:26.824 fused_ordering(279) 00:16:26.824 fused_ordering(280) 00:16:26.824 fused_ordering(281) 00:16:26.824 fused_ordering(282) 00:16:26.824 fused_ordering(283) 00:16:26.824 fused_ordering(284) 00:16:26.824 fused_ordering(285) 00:16:26.824 fused_ordering(286) 00:16:26.824 fused_ordering(287) 00:16:26.824 fused_ordering(288) 00:16:26.824 fused_ordering(289) 00:16:26.824 fused_ordering(290) 00:16:26.824 fused_ordering(291) 
00:16:26.824 fused_ordering(292) 00:16:26.824 fused_ordering(293) 00:16:26.824 fused_ordering(294) 00:16:26.824 fused_ordering(295) 00:16:26.824 fused_ordering(296) 00:16:26.824 fused_ordering(297) 00:16:26.824 fused_ordering(298) 00:16:26.824 fused_ordering(299) 00:16:26.824 fused_ordering(300) 00:16:26.824 fused_ordering(301) 00:16:26.824 fused_ordering(302) 00:16:26.824 fused_ordering(303) 00:16:26.824 fused_ordering(304) 00:16:26.824 fused_ordering(305) 00:16:26.824 fused_ordering(306) 00:16:26.824 fused_ordering(307) 00:16:26.824 fused_ordering(308) 00:16:26.824 fused_ordering(309) 00:16:26.824 fused_ordering(310) 00:16:26.824 fused_ordering(311) 00:16:26.824 fused_ordering(312) 00:16:26.825 fused_ordering(313) 00:16:26.825 fused_ordering(314) 00:16:26.825 fused_ordering(315) 00:16:26.825 fused_ordering(316) 00:16:26.825 fused_ordering(317) 00:16:26.825 fused_ordering(318) 00:16:26.825 fused_ordering(319) 00:16:26.825 fused_ordering(320) 00:16:26.825 fused_ordering(321) 00:16:26.825 fused_ordering(322) 00:16:26.825 fused_ordering(323) 00:16:26.825 fused_ordering(324) 00:16:26.825 fused_ordering(325) 00:16:26.825 fused_ordering(326) 00:16:26.825 fused_ordering(327) 00:16:26.825 fused_ordering(328) 00:16:26.825 fused_ordering(329) 00:16:26.825 fused_ordering(330) 00:16:26.825 fused_ordering(331) 00:16:26.825 fused_ordering(332) 00:16:26.825 fused_ordering(333) 00:16:26.825 fused_ordering(334) 00:16:26.825 fused_ordering(335) 00:16:26.825 fused_ordering(336) 00:16:26.825 fused_ordering(337) 00:16:26.825 fused_ordering(338) 00:16:26.825 fused_ordering(339) 00:16:26.825 fused_ordering(340) 00:16:26.825 fused_ordering(341) 00:16:26.825 fused_ordering(342) 00:16:26.825 fused_ordering(343) 00:16:26.825 fused_ordering(344) 00:16:26.825 fused_ordering(345) 00:16:26.825 fused_ordering(346) 00:16:26.825 fused_ordering(347) 00:16:26.825 fused_ordering(348) 00:16:26.825 fused_ordering(349) 00:16:26.825 fused_ordering(350) 00:16:26.825 fused_ordering(351) 00:16:26.825 
fused_ordering(352) 00:16:26.825 fused_ordering(353) 00:16:26.825 fused_ordering(354) 00:16:26.825 fused_ordering(355) 00:16:26.825 fused_ordering(356) 00:16:26.825 fused_ordering(357) 00:16:26.825 fused_ordering(358) 00:16:26.825 fused_ordering(359) 00:16:26.825 fused_ordering(360) 00:16:26.825 fused_ordering(361) 00:16:26.825 fused_ordering(362) 00:16:26.825 fused_ordering(363) 00:16:26.825 fused_ordering(364) 00:16:26.825 fused_ordering(365) 00:16:26.825 fused_ordering(366) 00:16:26.825 fused_ordering(367) 00:16:26.825 fused_ordering(368) 00:16:26.825 fused_ordering(369) 00:16:26.825 fused_ordering(370) 00:16:26.825 fused_ordering(371) 00:16:26.825 fused_ordering(372) 00:16:26.825 fused_ordering(373) 00:16:26.825 fused_ordering(374) 00:16:26.825 fused_ordering(375) 00:16:26.825 fused_ordering(376) 00:16:26.825 fused_ordering(377) 00:16:26.825 fused_ordering(378) 00:16:26.825 fused_ordering(379) 00:16:26.825 fused_ordering(380) 00:16:26.825 fused_ordering(381) 00:16:26.825 fused_ordering(382) 00:16:26.825 fused_ordering(383) 00:16:26.825 fused_ordering(384) 00:16:26.825 fused_ordering(385) 00:16:26.825 fused_ordering(386) 00:16:26.825 fused_ordering(387) 00:16:26.825 fused_ordering(388) 00:16:26.825 fused_ordering(389) 00:16:26.825 fused_ordering(390) 00:16:26.825 fused_ordering(391) 00:16:26.825 fused_ordering(392) 00:16:26.825 fused_ordering(393) 00:16:26.825 fused_ordering(394) 00:16:26.825 fused_ordering(395) 00:16:26.825 fused_ordering(396) 00:16:26.825 fused_ordering(397) 00:16:26.825 fused_ordering(398) 00:16:26.825 fused_ordering(399) 00:16:26.825 fused_ordering(400) 00:16:26.825 fused_ordering(401) 00:16:26.825 fused_ordering(402) 00:16:26.825 fused_ordering(403) 00:16:26.825 fused_ordering(404) 00:16:26.825 fused_ordering(405) 00:16:26.825 fused_ordering(406) 00:16:26.825 fused_ordering(407) 00:16:26.825 fused_ordering(408) 00:16:26.825 fused_ordering(409) 00:16:26.825 fused_ordering(410) 00:16:27.391 fused_ordering(411) 00:16:27.391 fused_ordering(412) 
00:16:27.391 fused_ordering(413) 00:16:27.391 fused_ordering(414) 00:16:27.391 fused_ordering(415) 00:16:27.391 fused_ordering(416) 00:16:27.391 fused_ordering(417) 00:16:27.391 fused_ordering(418) 00:16:27.391 fused_ordering(419) 00:16:27.391 fused_ordering(420) 00:16:27.391 fused_ordering(421) 00:16:27.391 fused_ordering(422) 00:16:27.391 fused_ordering(423) 00:16:27.391 fused_ordering(424) 00:16:27.391 fused_ordering(425) 00:16:27.391 fused_ordering(426) 00:16:27.391 fused_ordering(427) 00:16:27.391 fused_ordering(428) 00:16:27.391 fused_ordering(429) 00:16:27.391 fused_ordering(430) 00:16:27.391 fused_ordering(431) 00:16:27.391 fused_ordering(432) 00:16:27.391 fused_ordering(433) 00:16:27.391 fused_ordering(434) 00:16:27.391 fused_ordering(435) 00:16:27.391 fused_ordering(436) 00:16:27.391 fused_ordering(437) 00:16:27.391 fused_ordering(438) 00:16:27.391 fused_ordering(439) 00:16:27.391 fused_ordering(440) 00:16:27.391 fused_ordering(441) 00:16:27.391 fused_ordering(442) 00:16:27.391 fused_ordering(443) 00:16:27.391 fused_ordering(444) 00:16:27.391 fused_ordering(445) 00:16:27.391 fused_ordering(446) 00:16:27.391 fused_ordering(447) 00:16:27.391 fused_ordering(448) 00:16:27.391 fused_ordering(449) 00:16:27.391 fused_ordering(450) 00:16:27.391 fused_ordering(451) 00:16:27.391 fused_ordering(452) 00:16:27.391 fused_ordering(453) 00:16:27.391 fused_ordering(454) 00:16:27.391 fused_ordering(455) 00:16:27.391 fused_ordering(456) 00:16:27.391 fused_ordering(457) 00:16:27.391 fused_ordering(458) 00:16:27.391 fused_ordering(459) 00:16:27.391 fused_ordering(460) 00:16:27.391 fused_ordering(461) 00:16:27.391 fused_ordering(462) 00:16:27.391 fused_ordering(463) 00:16:27.391 fused_ordering(464) 00:16:27.391 fused_ordering(465) 00:16:27.391 fused_ordering(466) 00:16:27.391 fused_ordering(467) 00:16:27.391 fused_ordering(468) 00:16:27.391 fused_ordering(469) 00:16:27.391 fused_ordering(470) 00:16:27.391 fused_ordering(471) 00:16:27.391 fused_ordering(472) 00:16:27.391 
fused_ordering(473) 00:16:27.391 fused_ordering(474) 00:16:27.391 fused_ordering(475) 00:16:27.391 fused_ordering(476) 00:16:27.391 fused_ordering(477) 00:16:27.391 fused_ordering(478) 00:16:27.391 fused_ordering(479) 00:16:27.391 fused_ordering(480) 00:16:27.391 fused_ordering(481) 00:16:27.391 fused_ordering(482) 00:16:27.391 fused_ordering(483) 00:16:27.391 fused_ordering(484) 00:16:27.391 fused_ordering(485) 00:16:27.391 fused_ordering(486) 00:16:27.391 fused_ordering(487) 00:16:27.391 fused_ordering(488) 00:16:27.391 fused_ordering(489) 00:16:27.391 fused_ordering(490) 00:16:27.391 fused_ordering(491) 00:16:27.391 fused_ordering(492) 00:16:27.391 fused_ordering(493) 00:16:27.391 fused_ordering(494) 00:16:27.391 fused_ordering(495) 00:16:27.391 fused_ordering(496) 00:16:27.391 fused_ordering(497) 00:16:27.391 fused_ordering(498) 00:16:27.391 fused_ordering(499) 00:16:27.391 fused_ordering(500) 00:16:27.391 fused_ordering(501) 00:16:27.391 fused_ordering(502) 00:16:27.391 fused_ordering(503) 00:16:27.391 fused_ordering(504) 00:16:27.391 fused_ordering(505) 00:16:27.391 fused_ordering(506) 00:16:27.391 fused_ordering(507) 00:16:27.391 fused_ordering(508) 00:16:27.392 fused_ordering(509) 00:16:27.392 fused_ordering(510) 00:16:27.392 fused_ordering(511) 00:16:27.392 fused_ordering(512) 00:16:27.392 fused_ordering(513) 00:16:27.392 fused_ordering(514) 00:16:27.392 fused_ordering(515) 00:16:27.392 fused_ordering(516) 00:16:27.392 fused_ordering(517) 00:16:27.392 fused_ordering(518) 00:16:27.392 fused_ordering(519) 00:16:27.392 fused_ordering(520) 00:16:27.392 fused_ordering(521) 00:16:27.392 fused_ordering(522) 00:16:27.392 fused_ordering(523) 00:16:27.392 fused_ordering(524) 00:16:27.392 fused_ordering(525) 00:16:27.392 fused_ordering(526) 00:16:27.392 fused_ordering(527) 00:16:27.392 fused_ordering(528) 00:16:27.392 fused_ordering(529) 00:16:27.392 fused_ordering(530) 00:16:27.392 fused_ordering(531) 00:16:27.392 fused_ordering(532) 00:16:27.392 fused_ordering(533) 
00:16:27.392 fused_ordering(534) 00:16:27.392 fused_ordering(535) 00:16:27.392 fused_ordering(536) 00:16:27.392 fused_ordering(537) 00:16:27.392 fused_ordering(538) 00:16:27.392 fused_ordering(539) 00:16:27.392 fused_ordering(540) 00:16:27.392 fused_ordering(541) 00:16:27.392 fused_ordering(542) 00:16:27.392 fused_ordering(543) 00:16:27.392 fused_ordering(544) 00:16:27.392 fused_ordering(545) 00:16:27.392 fused_ordering(546) 00:16:27.392 fused_ordering(547) 00:16:27.392 fused_ordering(548) 00:16:27.392 fused_ordering(549) 00:16:27.392 fused_ordering(550) 00:16:27.392 fused_ordering(551) 00:16:27.392 fused_ordering(552) 00:16:27.392 fused_ordering(553) 00:16:27.392 fused_ordering(554) 00:16:27.392 fused_ordering(555) 00:16:27.392 fused_ordering(556) 00:16:27.392 fused_ordering(557) 00:16:27.392 fused_ordering(558) 00:16:27.392 fused_ordering(559) 00:16:27.392 fused_ordering(560) 00:16:27.392 fused_ordering(561) 00:16:27.392 fused_ordering(562) 00:16:27.392 fused_ordering(563) 00:16:27.392 fused_ordering(564) 00:16:27.392 fused_ordering(565) 00:16:27.392 fused_ordering(566) 00:16:27.392 fused_ordering(567) 00:16:27.392 fused_ordering(568) 00:16:27.392 fused_ordering(569) 00:16:27.392 fused_ordering(570) 00:16:27.392 fused_ordering(571) 00:16:27.392 fused_ordering(572) 00:16:27.392 fused_ordering(573) 00:16:27.392 fused_ordering(574) 00:16:27.392 fused_ordering(575) 00:16:27.392 fused_ordering(576) 00:16:27.392 fused_ordering(577) 00:16:27.392 fused_ordering(578) 00:16:27.392 fused_ordering(579) 00:16:27.392 fused_ordering(580) 00:16:27.392 fused_ordering(581) 00:16:27.392 fused_ordering(582) 00:16:27.392 fused_ordering(583) 00:16:27.392 fused_ordering(584) 00:16:27.392 fused_ordering(585) 00:16:27.392 fused_ordering(586) 00:16:27.392 fused_ordering(587) 00:16:27.392 fused_ordering(588) 00:16:27.392 fused_ordering(589) 00:16:27.392 fused_ordering(590) 00:16:27.392 fused_ordering(591) 00:16:27.392 fused_ordering(592) 00:16:27.392 fused_ordering(593) 00:16:27.392 
fused_ordering(594) 00:16:27.392 fused_ordering(595) 00:16:27.392 fused_ordering(596) 00:16:27.392 fused_ordering(597) 00:16:27.392 fused_ordering(598) 00:16:27.392 fused_ordering(599) 00:16:27.392 fused_ordering(600) 00:16:27.392 fused_ordering(601) 00:16:27.392 fused_ordering(602) 00:16:27.392 fused_ordering(603) 00:16:27.392 fused_ordering(604) 00:16:27.392 fused_ordering(605) 00:16:27.392 fused_ordering(606) 00:16:27.392 fused_ordering(607) 00:16:27.392 fused_ordering(608) 00:16:27.392 fused_ordering(609) 00:16:27.392 fused_ordering(610) 00:16:27.392 fused_ordering(611) 00:16:27.392 fused_ordering(612) 00:16:27.392 fused_ordering(613) 00:16:27.392 fused_ordering(614) 00:16:27.392 fused_ordering(615) 00:16:27.959 fused_ordering(616) 00:16:27.959 fused_ordering(617) 00:16:27.959 fused_ordering(618) 00:16:27.959 fused_ordering(619) 00:16:27.959 fused_ordering(620) 00:16:27.959 fused_ordering(621) 00:16:27.959 fused_ordering(622) 00:16:27.959 fused_ordering(623) 00:16:27.959 fused_ordering(624) 00:16:27.959 fused_ordering(625) 00:16:27.959 fused_ordering(626) 00:16:27.959 fused_ordering(627) 00:16:27.959 fused_ordering(628) 00:16:27.959 fused_ordering(629) 00:16:27.959 fused_ordering(630) 00:16:27.959 fused_ordering(631) 00:16:27.959 fused_ordering(632) 00:16:27.959 fused_ordering(633) 00:16:27.959 fused_ordering(634) 00:16:27.959 fused_ordering(635) 00:16:27.959 fused_ordering(636) 00:16:27.959 fused_ordering(637) 00:16:27.959 fused_ordering(638) 00:16:27.959 fused_ordering(639) 00:16:27.959 fused_ordering(640) 00:16:27.959 fused_ordering(641) 00:16:27.959 fused_ordering(642) 00:16:27.959 fused_ordering(643) 00:16:27.959 fused_ordering(644) 00:16:27.959 fused_ordering(645) 00:16:27.959 fused_ordering(646) 00:16:27.959 fused_ordering(647) 00:16:27.959 fused_ordering(648) 00:16:27.959 fused_ordering(649) 00:16:27.959 fused_ordering(650) 00:16:27.959 fused_ordering(651) 00:16:27.959 fused_ordering(652) 00:16:27.959 fused_ordering(653) 00:16:27.959 fused_ordering(654) 
00:16:27.959 fused_ordering(655) 00:16:27.959 fused_ordering(656) 00:16:27.959 fused_ordering(657) 00:16:27.959 fused_ordering(658) 00:16:27.959 fused_ordering(659) 00:16:27.959 fused_ordering(660) 00:16:27.959 fused_ordering(661) 00:16:27.959 fused_ordering(662) 00:16:27.959 fused_ordering(663) 00:16:27.959 fused_ordering(664) 00:16:27.959 fused_ordering(665) 00:16:27.959 fused_ordering(666) 00:16:27.959 fused_ordering(667) 00:16:27.959 fused_ordering(668) 00:16:27.959 fused_ordering(669) 00:16:27.959 fused_ordering(670) 00:16:27.959 fused_ordering(671) 00:16:27.959 fused_ordering(672) 00:16:27.959 fused_ordering(673) 00:16:27.959 fused_ordering(674) 00:16:27.959 fused_ordering(675) 00:16:27.959 fused_ordering(676) 00:16:27.959 fused_ordering(677) 00:16:27.959 fused_ordering(678) 00:16:27.959 fused_ordering(679) 00:16:27.959 fused_ordering(680) 00:16:27.959 fused_ordering(681) 00:16:27.959 fused_ordering(682) 00:16:27.959 fused_ordering(683) 00:16:27.959 fused_ordering(684) 00:16:27.959 fused_ordering(685) 00:16:27.959 fused_ordering(686) 00:16:27.959 fused_ordering(687) 00:16:27.959 fused_ordering(688) 00:16:27.959 fused_ordering(689) 00:16:27.959 fused_ordering(690) 00:16:27.959 fused_ordering(691) 00:16:27.959 fused_ordering(692) 00:16:27.959 fused_ordering(693) 00:16:27.959 fused_ordering(694) 00:16:27.959 fused_ordering(695) 00:16:27.959 fused_ordering(696) 00:16:27.959 fused_ordering(697) 00:16:27.959 fused_ordering(698) 00:16:27.959 fused_ordering(699) 00:16:27.959 fused_ordering(700) 00:16:27.959 fused_ordering(701) 00:16:27.959 fused_ordering(702) 00:16:27.959 fused_ordering(703) 00:16:27.959 fused_ordering(704) 00:16:27.959 fused_ordering(705) 00:16:27.959 fused_ordering(706) 00:16:27.959 fused_ordering(707) 00:16:27.959 fused_ordering(708) 00:16:27.959 fused_ordering(709) 00:16:27.959 fused_ordering(710) 00:16:27.959 fused_ordering(711) 00:16:27.959 fused_ordering(712) 00:16:27.959 fused_ordering(713) 00:16:27.959 fused_ordering(714) 00:16:27.959 
fused_ordering(715) 00:16:27.959 fused_ordering(716) 00:16:27.959 fused_ordering(717) 00:16:27.959 fused_ordering(718) 00:16:27.959 fused_ordering(719) 00:16:27.959 fused_ordering(720) 00:16:27.959 fused_ordering(721) 00:16:27.959 fused_ordering(722) 00:16:27.959 fused_ordering(723) 00:16:27.959 fused_ordering(724) 00:16:27.959 fused_ordering(725) 00:16:27.959 fused_ordering(726) 00:16:27.959 fused_ordering(727) 00:16:27.959 fused_ordering(728) 00:16:27.959 fused_ordering(729) 00:16:27.959 fused_ordering(730) 00:16:27.959 fused_ordering(731) 00:16:27.959 fused_ordering(732) 00:16:27.959 fused_ordering(733) 00:16:27.959 fused_ordering(734) 00:16:27.959 fused_ordering(735) 00:16:27.959 fused_ordering(736) 00:16:27.959 fused_ordering(737) 00:16:27.959 fused_ordering(738) 00:16:27.959 fused_ordering(739) 00:16:27.959 fused_ordering(740) 00:16:27.959 fused_ordering(741) 00:16:27.959 fused_ordering(742) 00:16:27.959 fused_ordering(743) 00:16:27.959 fused_ordering(744) 00:16:27.959 fused_ordering(745) 00:16:27.959 fused_ordering(746) 00:16:27.959 fused_ordering(747) 00:16:27.959 fused_ordering(748) 00:16:27.959 fused_ordering(749) 00:16:27.959 fused_ordering(750) 00:16:27.959 fused_ordering(751) 00:16:27.959 fused_ordering(752) 00:16:27.959 fused_ordering(753) 00:16:27.959 fused_ordering(754) 00:16:27.959 fused_ordering(755) 00:16:27.959 fused_ordering(756) 00:16:27.959 fused_ordering(757) 00:16:27.959 fused_ordering(758) 00:16:27.959 fused_ordering(759) 00:16:27.959 fused_ordering(760) 00:16:27.959 fused_ordering(761) 00:16:27.960 fused_ordering(762) 00:16:27.960 fused_ordering(763) 00:16:27.960 fused_ordering(764) 00:16:27.960 fused_ordering(765) 00:16:27.960 fused_ordering(766) 00:16:27.960 fused_ordering(767) 00:16:27.960 fused_ordering(768) 00:16:27.960 fused_ordering(769) 00:16:27.960 fused_ordering(770) 00:16:27.960 fused_ordering(771) 00:16:27.960 fused_ordering(772) 00:16:27.960 fused_ordering(773) 00:16:27.960 fused_ordering(774) 00:16:27.960 fused_ordering(775) 
00:16:27.960 fused_ordering(776) 00:16:27.960 fused_ordering(777) 00:16:27.960 fused_ordering(778) 00:16:27.960 fused_ordering(779) 00:16:27.960 fused_ordering(780) 00:16:27.960 fused_ordering(781) 00:16:27.960 fused_ordering(782) 00:16:27.960 fused_ordering(783) 00:16:27.960 fused_ordering(784) 00:16:27.960 fused_ordering(785) 00:16:27.960 fused_ordering(786) 00:16:27.960 fused_ordering(787) 00:16:27.960 fused_ordering(788) 00:16:27.960 fused_ordering(789) 00:16:27.960 fused_ordering(790) 00:16:27.960 fused_ordering(791) 00:16:27.960 fused_ordering(792) 00:16:27.960 fused_ordering(793) 00:16:27.960 fused_ordering(794) 00:16:27.960 fused_ordering(795) 00:16:27.960 fused_ordering(796) 00:16:27.960 fused_ordering(797) 00:16:27.960 fused_ordering(798) 00:16:27.960 fused_ordering(799) 00:16:27.960 fused_ordering(800) 00:16:27.960 fused_ordering(801) 00:16:27.960 fused_ordering(802) 00:16:27.960 fused_ordering(803) 00:16:27.960 fused_ordering(804) 00:16:27.960 fused_ordering(805) 00:16:27.960 fused_ordering(806) 00:16:27.960 fused_ordering(807) 00:16:27.960 fused_ordering(808) 00:16:27.960 fused_ordering(809) 00:16:27.960 fused_ordering(810) 00:16:27.960 fused_ordering(811) 00:16:27.960 fused_ordering(812) 00:16:27.960 fused_ordering(813) 00:16:27.960 fused_ordering(814) 00:16:27.960 fused_ordering(815) 00:16:27.960 fused_ordering(816) 00:16:27.960 fused_ordering(817) 00:16:27.960 fused_ordering(818) 00:16:27.960 fused_ordering(819) 00:16:27.960 fused_ordering(820) 00:16:28.528 fused_ordering(821) 00:16:28.528 fused_ordering(822) 00:16:28.528 fused_ordering(823) 00:16:28.528 fused_ordering(824) 00:16:28.528 fused_ordering(825) 00:16:28.528 fused_ordering(826) 00:16:28.528 fused_ordering(827) 00:16:28.528 fused_ordering(828) 00:16:28.528 fused_ordering(829) 00:16:28.528 fused_ordering(830) 00:16:28.528 fused_ordering(831) 00:16:28.528 fused_ordering(832) 00:16:28.528 fused_ordering(833) 00:16:28.528 fused_ordering(834) 00:16:28.528 fused_ordering(835) 00:16:28.528 
fused_ordering(836) 00:16:28.528 fused_ordering(837) 00:16:28.528 fused_ordering(838) 00:16:28.528 fused_ordering(839) 00:16:28.528 fused_ordering(840) 00:16:28.528 fused_ordering(841) 00:16:28.528 fused_ordering(842) 00:16:28.528 fused_ordering(843) 00:16:28.528 fused_ordering(844) 00:16:28.528 fused_ordering(845) 00:16:28.528 fused_ordering(846) 00:16:28.528 fused_ordering(847) 00:16:28.528 fused_ordering(848) 00:16:28.528 fused_ordering(849) 00:16:28.528 fused_ordering(850) 00:16:28.528 fused_ordering(851) 00:16:28.528 fused_ordering(852) 00:16:28.528 fused_ordering(853) 00:16:28.528 fused_ordering(854) 00:16:28.528 fused_ordering(855) 00:16:28.528 fused_ordering(856) 00:16:28.528 fused_ordering(857) 00:16:28.528 fused_ordering(858) 00:16:28.528 fused_ordering(859) 00:16:28.528 fused_ordering(860) 00:16:28.528 fused_ordering(861) 00:16:28.528 fused_ordering(862) 00:16:28.528 fused_ordering(863) 00:16:28.528 fused_ordering(864) 00:16:28.528 fused_ordering(865) 00:16:28.528 fused_ordering(866) 00:16:28.528 fused_ordering(867) 00:16:28.528 fused_ordering(868) 00:16:28.528 fused_ordering(869) 00:16:28.528 fused_ordering(870) 00:16:28.529 fused_ordering(871) 00:16:28.529 fused_ordering(872) 00:16:28.529 fused_ordering(873) 00:16:28.529 fused_ordering(874) 00:16:28.529 fused_ordering(875) 00:16:28.529 fused_ordering(876) 00:16:28.529 fused_ordering(877) 00:16:28.529 fused_ordering(878) 00:16:28.529 fused_ordering(879) 00:16:28.529 fused_ordering(880) 00:16:28.529 fused_ordering(881) 00:16:28.529 fused_ordering(882) 00:16:28.529 fused_ordering(883) 00:16:28.529 fused_ordering(884) 00:16:28.529 fused_ordering(885) 00:16:28.529 fused_ordering(886) 00:16:28.529 fused_ordering(887) 00:16:28.529 fused_ordering(888) 00:16:28.529 fused_ordering(889) 00:16:28.529 fused_ordering(890) 00:16:28.529 fused_ordering(891) 00:16:28.529 fused_ordering(892) 00:16:28.529 fused_ordering(893) 00:16:28.529 fused_ordering(894) 00:16:28.529 fused_ordering(895) 00:16:28.529 fused_ordering(896) 
00:16:28.529 fused_ordering(897) 00:16:28.529 fused_ordering(898) 00:16:28.529 fused_ordering(899) 00:16:28.529 fused_ordering(900) 00:16:28.529 fused_ordering(901) 00:16:28.529 fused_ordering(902) 00:16:28.529 fused_ordering(903) 00:16:28.529 fused_ordering(904) 00:16:28.529 fused_ordering(905) 00:16:28.529 fused_ordering(906) 00:16:28.529 fused_ordering(907) 00:16:28.529 fused_ordering(908) 00:16:28.529 fused_ordering(909) 00:16:28.529 fused_ordering(910) 00:16:28.529 fused_ordering(911) 00:16:28.529 fused_ordering(912) 00:16:28.529 fused_ordering(913) 00:16:28.529 fused_ordering(914) 00:16:28.529 fused_ordering(915) 00:16:28.529 fused_ordering(916) 00:16:28.529 fused_ordering(917) 00:16:28.529 fused_ordering(918) 00:16:28.529 fused_ordering(919) 00:16:28.529 fused_ordering(920) 00:16:28.529 fused_ordering(921) 00:16:28.529 fused_ordering(922) 00:16:28.529 fused_ordering(923) 00:16:28.529 fused_ordering(924) 00:16:28.529 fused_ordering(925) 00:16:28.529 fused_ordering(926) 00:16:28.529 fused_ordering(927) 00:16:28.529 fused_ordering(928) 00:16:28.529 fused_ordering(929) 00:16:28.529 fused_ordering(930) 00:16:28.529 fused_ordering(931) 00:16:28.529 fused_ordering(932) 00:16:28.529 fused_ordering(933) 00:16:28.529 fused_ordering(934) 00:16:28.529 fused_ordering(935) 00:16:28.529 fused_ordering(936) 00:16:28.529 fused_ordering(937) 00:16:28.529 fused_ordering(938) 00:16:28.529 fused_ordering(939) 00:16:28.529 fused_ordering(940) 00:16:28.529 fused_ordering(941) 00:16:28.529 fused_ordering(942) 00:16:28.529 fused_ordering(943) 00:16:28.529 fused_ordering(944) 00:16:28.529 fused_ordering(945) 00:16:28.529 fused_ordering(946) 00:16:28.529 fused_ordering(947) 00:16:28.529 fused_ordering(948) 00:16:28.529 fused_ordering(949) 00:16:28.529 fused_ordering(950) 00:16:28.529 fused_ordering(951) 00:16:28.529 fused_ordering(952) 00:16:28.529 fused_ordering(953) 00:16:28.529 fused_ordering(954) 00:16:28.529 fused_ordering(955) 00:16:28.529 fused_ordering(956) 00:16:28.529 
fused_ordering(957) 00:16:28.529 fused_ordering(958) 00:16:28.529 fused_ordering(959) 00:16:28.529 fused_ordering(960) 00:16:28.529 fused_ordering(961) 00:16:28.529 fused_ordering(962) 00:16:28.529 fused_ordering(963) 00:16:28.529 fused_ordering(964) 00:16:28.529 fused_ordering(965) 00:16:28.529 fused_ordering(966) 00:16:28.529 fused_ordering(967) 00:16:28.529 fused_ordering(968) 00:16:28.529 fused_ordering(969) 00:16:28.529 fused_ordering(970) 00:16:28.529 fused_ordering(971) 00:16:28.529 fused_ordering(972) 00:16:28.529 fused_ordering(973) 00:16:28.529 fused_ordering(974) 00:16:28.529 fused_ordering(975) 00:16:28.529 fused_ordering(976) 00:16:28.529 fused_ordering(977) 00:16:28.529 fused_ordering(978) 00:16:28.529 fused_ordering(979) 00:16:28.529 fused_ordering(980) 00:16:28.529 fused_ordering(981) 00:16:28.529 fused_ordering(982) 00:16:28.529 fused_ordering(983) 00:16:28.529 fused_ordering(984) 00:16:28.529 fused_ordering(985) 00:16:28.529 fused_ordering(986) 00:16:28.529 fused_ordering(987) 00:16:28.529 fused_ordering(988) 00:16:28.529 fused_ordering(989) 00:16:28.529 fused_ordering(990) 00:16:28.529 fused_ordering(991) 00:16:28.529 fused_ordering(992) 00:16:28.529 fused_ordering(993) 00:16:28.529 fused_ordering(994) 00:16:28.529 fused_ordering(995) 00:16:28.529 fused_ordering(996) 00:16:28.529 fused_ordering(997) 00:16:28.529 fused_ordering(998) 00:16:28.529 fused_ordering(999) 00:16:28.529 fused_ordering(1000) 00:16:28.529 fused_ordering(1001) 00:16:28.529 fused_ordering(1002) 00:16:28.529 fused_ordering(1003) 00:16:28.529 fused_ordering(1004) 00:16:28.529 fused_ordering(1005) 00:16:28.529 fused_ordering(1006) 00:16:28.529 fused_ordering(1007) 00:16:28.529 fused_ordering(1008) 00:16:28.529 fused_ordering(1009) 00:16:28.529 fused_ordering(1010) 00:16:28.529 fused_ordering(1011) 00:16:28.529 fused_ordering(1012) 00:16:28.529 fused_ordering(1013) 00:16:28.529 fused_ordering(1014) 00:16:28.529 fused_ordering(1015) 00:16:28.529 fused_ordering(1016) 00:16:28.529 
fused_ordering(1017) 00:16:28.529 fused_ordering(1018) 00:16:28.529 fused_ordering(1019) 00:16:28.529 fused_ordering(1020) 00:16:28.529 fused_ordering(1021) 00:16:28.529 fused_ordering(1022) 00:16:28.529 fused_ordering(1023) 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.529 rmmod nvme_tcp 00:16:28.529 rmmod nvme_fabrics 00:16:28.529 rmmod nvme_keyring 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2251813 ']' 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2251813 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2251813 ']' 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@954 -- # kill -0 2251813 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:28.529 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2251813 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2251813' 00:16:28.788 killing process with pid 2251813 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2251813 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2251813 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:16:28.788 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:31.326 00:16:31.326 real 0m7.504s 00:16:31.326 user 0m5.122s 00:16:31.326 sys 0m3.315s 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:31.326 ************************************ 00:16:31.326 END TEST nvmf_fused_ordering 00:16:31.326 ************************************ 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:31.326 ************************************ 00:16:31.326 START TEST nvmf_ns_masking 00:16:31.326 ************************************ 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:31.326 * Looking for test storage... 
00:16:31.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.326 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.327 
01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=1ee07ed3-0c09-4f4a-8a15-a8c9e19027cd 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7a22f127-f742-4913-80d8-6f4a29b6ae9f 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=9a0a38d6-8e65-490a-ae49-caccb9770603 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.327 01:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.327 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:33.230 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:33.230 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:33.230 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:33.230 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:33.230 01:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.230 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.230 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.230 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.230 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:33.230 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.230 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.230 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.230 01:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:33.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:16:33.230 00:16:33.230 --- 10.0.0.2 ping statistics --- 00:16:33.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.230 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:16:33.230 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:33.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:16:33.231 00:16:33.231 --- 10.0.0.1 ping statistics --- 00:16:33.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.231 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2254164 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2254164 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2254164 ']' 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.231 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:33.231 [2024-07-26 01:52:15.189858] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:16:33.231 [2024-07-26 01:52:15.189943] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.231 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.489 [2024-07-26 01:52:15.259980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.489 [2024-07-26 01:52:15.353789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.489 [2024-07-26 01:52:15.353856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.489 [2024-07-26 01:52:15.353883] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.489 [2024-07-26 01:52:15.353897] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.489 [2024-07-26 01:52:15.353908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:33.489 [2024-07-26 01:52:15.353949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.489 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.489 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:33.489 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:33.489 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:33.489 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:33.489 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.489 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:34.055 [2024-07-26 01:52:15.761715] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.055 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:34.055 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:34.055 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:34.055 Malloc1 00:16:34.055 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:34.313 Malloc2 00:16:34.313 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:34.571 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:34.829 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.089 [2024-07-26 01:52:17.015090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.089 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:35.089 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9a0a38d6-8e65-490a-ae49-caccb9770603 -a 10.0.0.2 -s 4420 -i 4 00:16:35.348 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:35.348 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:35.348 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:35.348 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:35.348 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:37.269 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:37.269 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:37.269 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:16:37.269 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:37.269 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:37.270 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:37.270 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:37.270 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:37.527 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:37.527 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:37.527 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:37.527 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:37.527 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:37.527 [ 0]:0x1 00:16:37.527 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:37.527 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:37.527 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d173cf6ed4b74c83b6c0c201dd35f853 00:16:37.527 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d173cf6ed4b74c83b6c0c201dd35f853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:37.527 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:37.784 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:37.784 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:37.784 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:37.784 [ 0]:0x1 00:16:37.784 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:37.784 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:37.784 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d173cf6ed4b74c83b6c0c201dd35f853 00:16:37.784 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d173cf6ed4b74c83b6c0c201dd35f853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:37.784 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:37.784 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:37.784 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:37.784 [ 1]:0x2 00:16:37.785 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:37.785 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:37.785 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dfa41a99104e4f92a45328f182b8fa9e 00:16:37.785 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dfa41a99104e4f92a45328f182b8fa9e != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:37.785 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:37.785 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.042 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.299 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:38.557 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:38.557 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9a0a38d6-8e65-490a-ae49-caccb9770603 -a 10.0.0.2 -s 4420 -i 4 00:16:38.557 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:38.557 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:38.557 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.557 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:38.557 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:38.557 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:41.092 01:52:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:41.092 [ 0]:0x2 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dfa41a99104e4f92a45328f182b8fa9e 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dfa41a99104e4f92a45328f182b8fa9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.092 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:41.092 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:41.349 [ 0]:0x1 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d173cf6ed4b74c83b6c0c201dd35f853 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d173cf6ed4b74c83b6c0c201dd35f853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:16:41.349 [ 1]:0x2 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dfa41a99104e4f92a45328f182b8fa9e 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dfa41a99104e4f92a45328f182b8fa9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.349 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:41.607 [ 0]:0x2 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dfa41a99104e4f92a45328f182b8fa9e 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ dfa41a99104e4f92a45328f182b8fa9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.607 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:41.865 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:41.865 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9a0a38d6-8e65-490a-ae49-caccb9770603 -a 10.0.0.2 -s 4420 -i 4 00:16:42.124 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:42.124 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:42.124 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:42.124 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:42.124 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:42.124 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:44.027 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:44.027 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:16:44.027 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:44.027 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:44.027 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.027 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:44.027 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:44.027 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:44.285 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:44.285 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:44.285 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:44.285 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:44.285 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:44.286 [ 0]:0x1 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d173cf6ed4b74c83b6c0c201dd35f853 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d173cf6ed4b74c83b6c0c201dd35f853 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:44.286 [ 1]:0x2 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dfa41a99104e4f92a45328f182b8fa9e 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dfa41a99104e4f92a45328f182b8fa9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.286 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:44.851 [ 0]:0x2 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dfa41a99104e4f92a45328f182b8fa9e 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dfa41a99104e4f92a45328f182b8fa9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:44.851 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:44.852 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:44.852 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.852 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.852 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.852 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.852 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.852 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:16:44.852 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.852 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:44.852 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:45.111 [2024-07-26 01:52:26.889242] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:45.111 request: 00:16:45.111 { 00:16:45.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.111 "nsid": 2, 00:16:45.111 "host": "nqn.2016-06.io.spdk:host1", 00:16:45.111 "method": "nvmf_ns_remove_host", 00:16:45.111 "req_id": 1 00:16:45.111 } 00:16:45.111 Got JSON-RPC error response 00:16:45.111 response: 00:16:45.111 { 00:16:45.111 "code": -32602, 00:16:45.111 "message": "Invalid parameters" 00:16:45.111 } 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.111 01:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:45.111 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:45.111 [ 0]:0x2 00:16:45.111 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:45.111 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:45.111 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dfa41a99104e4f92a45328f182b8fa9e 00:16:45.111 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dfa41a99104e4f92a45328f182b8fa9e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.111 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:45.111 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.370 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2255796 00:16:45.370 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:45.370 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.370 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2255796 /var/tmp/host.sock 00:16:45.370 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2255796 ']' 00:16:45.370 
01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:45.370 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.370 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:45.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:45.370 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.370 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:45.370 [2024-07-26 01:52:27.232164] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:16:45.370 [2024-07-26 01:52:27.232263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255796 ] 00:16:45.370 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.370 [2024-07-26 01:52:27.296558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.629 [2024-07-26 01:52:27.389604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.886 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.886 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:45.886 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.144 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:46.404 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 1ee07ed3-0c09-4f4a-8a15-a8c9e19027cd 00:16:46.404 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:46.404 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1EE07ED30C094F4A8A15A8C9E19027CD -i 00:16:46.662 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7a22f127-f742-4913-80d8-6f4a29b6ae9f 00:16:46.662 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:46.662 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7A22F127F742491380D86F4A29B6AE9F -i 00:16:46.662 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:46.920 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:47.178 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:47.178 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:47.765 nvme0n1 00:16:47.765 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:47.765 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:48.332 nvme1n2 00:16:48.332 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:48.332 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:48.332 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:48.332 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:48.332 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:48.590 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:48.590 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:48.590 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:48.590 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:48.848 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 1ee07ed3-0c09-4f4a-8a15-a8c9e19027cd == \1\e\e\0\7\e\d\3\-\0\c\0\9\-\4\f\4\a\-\8\a\1\5\-\a\8\c\9\e\1\9\0\2\7\c\d ]] 00:16:48.848 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:48.848 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:48.848 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:49.107 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7a22f127-f742-4913-80d8-6f4a29b6ae9f == \7\a\2\2\f\1\2\7\-\f\7\4\2\-\4\9\1\3\-\8\0\d\8\-\6\f\4\a\2\9\b\6\a\e\9\f ]] 00:16:49.107 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2255796 00:16:49.107 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2255796 ']' 00:16:49.107 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2255796 00:16:49.107 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:49.107 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.107 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2255796 00:16:49.107 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:49.107 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:49.107 
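The NGUID checks above round-trip the `uuid2nguid` conversion used when the namespaces were added (e.g. `1ee07ed3-0c09-4f4a-8a15-a8c9e19027cd` → `1EE07ED30C094F4A8A15A8C9E19027CD`). A minimal standalone sketch, assuming the helper does nothing beyond stripping dashes and upper-casing (the log only shows `tr -d -`, but the `-g` values passed to `nvmf_subsystem_add_ns` are upper-case):

```shell
# Sketch of the uuid2nguid conversion seen in this log:
# strip the dashes from a UUID and upper-case the hex digits
# to produce the 32-character NGUID passed via rpc.py's -g flag.
uuid="1ee07ed3-0c09-4f4a-8a15-a8c9e19027cd"
nguid=$(printf '%s' "$uuid" | tr -d '-' | tr '[:lower:]' '[:upper:]')
echo "$nguid"
```

This matches the `-g 1EE07ED30C094F4A8A15A8C9E19027CD` argument recorded earlier in the log for Malloc1.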
01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2255796' 00:16:49.107 killing process with pid 2255796 00:16:49.107 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2255796 00:16:49.107 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2255796 00:16:49.365 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.623 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:49.623 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:49.623 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:49.623 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:49.623 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.623 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:49.623 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.623 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.884 rmmod nvme_tcp 00:16:49.884 rmmod nvme_fabrics 00:16:49.884 rmmod nvme_keyring 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' 
-n 2254164 ']' 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2254164 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2254164 ']' 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2254164 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2254164 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2254164' 00:16:49.884 killing process with pid 2254164 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2254164 00:16:49.884 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2254164 00:16:50.143 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:50.143 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:50.143 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:50.143 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.143 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:16:50.143 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.143 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.143 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.059 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:52.059 00:16:52.059 real 0m21.210s 00:16:52.059 user 0m27.372s 00:16:52.059 sys 0m4.240s 00:16:52.059 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.059 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:52.059 ************************************ 00:16:52.059 END TEST nvmf_ns_masking 00:16:52.059 ************************************ 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:52.318 ************************************ 00:16:52.318 START TEST nvmf_nvme_cli 00:16:52.318 ************************************ 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:52.318 * Looking for test storage... 
00:16:52.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.318 01:52:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:52.318 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.222 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.222 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:54.222 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:54.222 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:54.222 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:54.222 
01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:54.222 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:54.222 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:54.222 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.223 01:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:54.223 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:54.223 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:54.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:54.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:54.223 01:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.223 01:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:54.223 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:54.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:16:54.481 00:16:54.481 --- 10.0.0.2 ping statistics --- 00:16:54.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.481 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:16:54.481 00:16:54.481 --- 10.0.0.1 ping statistics --- 00:16:54.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.481 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2258288 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2258288 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2258288 ']' 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:54.481 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.481 [2024-07-26 01:52:36.346613] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:16:54.481 [2024-07-26 01:52:36.346700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.481 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.481 [2024-07-26 01:52:36.410180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.741 [2024-07-26 01:52:36.501370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.741 [2024-07-26 01:52:36.501444] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:54.741 [2024-07-26 01:52:36.501471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.741 [2024-07-26 01:52:36.501485] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.741 [2024-07-26 01:52:36.501497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.741 [2024-07-26 01:52:36.501579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.741 [2024-07-26 01:52:36.501650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.741 [2024-07-26 01:52:36.501742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.741 [2024-07-26 01:52:36.501744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 [2024-07-26 01:52:36.653545] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 Malloc0 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 Malloc1 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.741 01:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 [2024-07-26 01:52:36.739533] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.741 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:54.999 00:16:54.999 Discovery Log Number of Records 2, Generation counter 2 00:16:54.999 =====Discovery Log Entry 0====== 00:16:54.999 trtype: tcp 00:16:54.999 adrfam: ipv4 00:16:54.999 subtype: current discovery subsystem 00:16:54.999 treq: not required 00:16:54.999 portid: 0 00:16:54.999 trsvcid: 4420 00:16:54.999 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:54.999 traddr: 10.0.0.2 00:16:54.999 eflags: explicit discovery connections, duplicate discovery information 00:16:54.999 sectype: none 00:16:54.999 =====Discovery Log Entry 1====== 00:16:54.999 trtype: tcp 00:16:54.999 adrfam: ipv4 00:16:54.999 subtype: nvme subsystem 00:16:54.999 treq: not required 00:16:54.999 portid: 0 00:16:54.999 trsvcid: 4420 00:16:54.999 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:54.999 traddr: 10.0.0.2 00:16:54.999 eflags: none 00:16:54.999 sectype: none 00:16:54.999 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:54.999 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:54.999 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:54.999 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:54.999 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:54.999 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:54.999 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:54.999 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:54.999 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:16:54.999 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:54.999 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.566 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:55.566 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:55.566 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.566 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:55.566 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:55.566 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:57.471 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:57.471 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:57.471 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:57.730 /dev/nvme0n1 ]] 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:57.730 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.731 rmmod nvme_tcp 00:16:57.731 rmmod nvme_fabrics 00:16:57.731 rmmod 
nvme_keyring 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2258288 ']' 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2258288 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2258288 ']' 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2258288 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2258288 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2258288' 00:16:57.731 killing process with pid 2258288 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2258288 00:16:57.731 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2258288 00:16:57.991 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.251 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.251 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.251 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.251 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.251 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.251 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.251 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.154 00:17:00.154 real 0m7.928s 00:17:00.154 user 0m14.364s 00:17:00.154 sys 0m2.143s 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:00.154 ************************************ 00:17:00.154 END TEST nvmf_nvme_cli 00:17:00.154 ************************************ 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.154 
************************************ 00:17:00.154 START TEST nvmf_vfio_user 00:17:00.154 ************************************ 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:00.154 * Looking for test storage... 00:17:00.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.154 01:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.154 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:00.415 01:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2259091 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2259091' 00:17:00.415 Process pid: 2259091 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2259091 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2259091 ']' 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.415 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:00.415 [2024-07-26 01:52:42.214673] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:17:00.415 [2024-07-26 01:52:42.214757] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.415 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.415 [2024-07-26 01:52:42.277899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:00.415 [2024-07-26 01:52:42.365146] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.415 [2024-07-26 01:52:42.365197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.415 [2024-07-26 01:52:42.365226] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.415 [2024-07-26 01:52:42.365238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.415 [2024-07-26 01:52:42.365248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:00.415 [2024-07-26 01:52:42.365315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.415 [2024-07-26 01:52:42.369078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.415 [2024-07-26 01:52:42.369148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.415 [2024-07-26 01:52:42.369152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.674 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:00.674 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:00.674 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:01.612 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:01.870 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:01.870 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:01.870 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:01.870 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:01.870 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:02.128 Malloc1 00:17:02.128 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:02.418 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:02.675 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:02.931 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:02.931 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:02.931 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:03.187 Malloc2 00:17:03.187 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:03.445 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:03.701 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:03.967 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:03.967 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:03.967 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:03.967 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:03.967 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:03.967 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:03.967 [2024-07-26 01:52:45.818036] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:17:03.967 [2024-07-26 01:52:45.818135] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2259510 ] 00:17:03.967 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.967 [2024-07-26 01:52:45.853342] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:03.967 [2024-07-26 01:52:45.861511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:03.967 [2024-07-26 01:52:45.861543] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fadb5cc5000 00:17:03.967 [2024-07-26 01:52:45.862504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:03.967 [2024-07-26 01:52:45.863503] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:03.967 [2024-07-26 
01:52:45.864509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:03.967 [2024-07-26 01:52:45.865515] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:03.967 [2024-07-26 01:52:45.866523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:03.967 [2024-07-26 01:52:45.867524] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:03.967 [2024-07-26 01:52:45.868528] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:03.967 [2024-07-26 01:52:45.869531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:03.967 [2024-07-26 01:52:45.870542] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:03.967 [2024-07-26 01:52:45.870561] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fadb4a79000 00:17:03.967 [2024-07-26 01:52:45.871678] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:03.967 [2024-07-26 01:52:45.887661] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:03.967 [2024-07-26 01:52:45.887706] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:03.967 [2024-07-26 01:52:45.892652] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:17:03.967 [2024-07-26 01:52:45.892706] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:03.967 [2024-07-26 01:52:45.892803] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:03.967 [2024-07-26 01:52:45.892835] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:03.967 [2024-07-26 01:52:45.892845] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:03.967 [2024-07-26 01:52:45.893653] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:03.967 [2024-07-26 01:52:45.893679] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:03.967 [2024-07-26 01:52:45.893692] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:03.967 [2024-07-26 01:52:45.894651] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:03.967 [2024-07-26 01:52:45.894670] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:03.967 [2024-07-26 01:52:45.894683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:03.967 [2024-07-26 01:52:45.895651] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:03.967 [2024-07-26 01:52:45.895669] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:03.967 [2024-07-26 01:52:45.896655] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:03.967 [2024-07-26 01:52:45.896675] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:03.967 [2024-07-26 01:52:45.896683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:03.967 [2024-07-26 01:52:45.896694] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:03.967 [2024-07-26 01:52:45.896805] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:03.967 [2024-07-26 01:52:45.896814] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:03.967 [2024-07-26 01:52:45.896822] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:03.967 [2024-07-26 01:52:45.897664] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:03.967 [2024-07-26 01:52:45.898666] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:03.967 [2024-07-26 01:52:45.899675] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:03.967 
[2024-07-26 01:52:45.900669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:03.967 [2024-07-26 01:52:45.900799] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:03.967 [2024-07-26 01:52:45.901684] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:03.967 [2024-07-26 01:52:45.901702] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:03.967 [2024-07-26 01:52:45.901710] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.901734] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:03.967 [2024-07-26 01:52:45.901753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.901781] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:03.967 [2024-07-26 01:52:45.901790] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:03.967 [2024-07-26 01:52:45.901797] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:03.967 [2024-07-26 01:52:45.901822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:03.967 [2024-07-26 01:52:45.901879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:03.967 [2024-07-26 01:52:45.901897] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:03.967 [2024-07-26 01:52:45.901906] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:03.967 [2024-07-26 01:52:45.901913] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:03.967 [2024-07-26 01:52:45.901921] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:03.967 [2024-07-26 01:52:45.901930] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:03.967 [2024-07-26 01:52:45.901938] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:03.967 [2024-07-26 01:52:45.901946] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.901959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.901978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:03.967 [2024-07-26 01:52:45.901998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:03.967 [2024-07-26 01:52:45.902019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.967 [2024-07-26 01:52:45.902033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.967 [2024-07-26 01:52:45.902068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.967 [2024-07-26 01:52:45.902082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.967 [2024-07-26 01:52:45.902091] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902108] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:03.967 [2024-07-26 01:52:45.902136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:03.967 [2024-07-26 01:52:45.902147] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:03.967 [2024-07-26 01:52:45.902156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902186] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902199] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:03.967 [2024-07-26 01:52:45.902215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:03.967 [2024-07-26 01:52:45.902283] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902300] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902314] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:03.967 [2024-07-26 01:52:45.902323] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:03.967 [2024-07-26 01:52:45.902329] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:03.967 [2024-07-26 01:52:45.902338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:03.967 [2024-07-26 01:52:45.902372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:03.967 [2024-07-26 01:52:45.902389] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:03.967 [2024-07-26 01:52:45.902405] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902419] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:03.967 [2024-07-26 
01:52:45.902430] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:03.967 [2024-07-26 01:52:45.902438] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:03.967 [2024-07-26 01:52:45.902443] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:03.967 [2024-07-26 01:52:45.902452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:03.967 [2024-07-26 01:52:45.902474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:03.967 [2024-07-26 01:52:45.902497] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902512] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902523] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:03.967 [2024-07-26 01:52:45.902531] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:03.967 [2024-07-26 01:52:45.902536] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:03.967 [2024-07-26 01:52:45.902545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:03.967 [2024-07-26 01:52:45.902556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:03.967 [2024-07-26 01:52:45.902569] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902580] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902593] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902609] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902618] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902627] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902635] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:03.967 [2024-07-26 01:52:45.902643] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:03.967 [2024-07-26 01:52:45.902651] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:03.967 [2024-07-26 01:52:45.902677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:03.967 [2024-07-26 01:52:45.902695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:17:03.967 [2024-07-26 01:52:45.902714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:03.967 [2024-07-26 01:52:45.902725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:03.967 [2024-07-26 01:52:45.902740] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:03.967 [2024-07-26 01:52:45.902751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:03.967 [2024-07-26 01:52:45.902766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:03.967 [2024-07-26 01:52:45.902777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:03.967 [2024-07-26 01:52:45.902798] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:03.967 [2024-07-26 01:52:45.902808] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:03.968 [2024-07-26 01:52:45.902814] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:03.968 [2024-07-26 01:52:45.902820] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:03.968 [2024-07-26 01:52:45.902825] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:03.968 [2024-07-26 01:52:45.902834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:03.968 [2024-07-26 01:52:45.902845] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:17:03.968 [2024-07-26 01:52:45.902853] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:03.968 [2024-07-26 01:52:45.902859] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:03.968 [2024-07-26 01:52:45.902867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:03.968 [2024-07-26 01:52:45.902878] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:03.968 [2024-07-26 01:52:45.902885] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:03.968 [2024-07-26 01:52:45.902891] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:03.968 [2024-07-26 01:52:45.902900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:03.968 [2024-07-26 01:52:45.902916] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:03.968 [2024-07-26 01:52:45.902924] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:03.968 [2024-07-26 01:52:45.902929] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:03.968 [2024-07-26 01:52:45.902938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:03.968 [2024-07-26 01:52:45.902949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:03.968 [2024-07-26 01:52:45.902968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:03.968 [2024-07-26 01:52:45.902984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:03.968 [2024-07-26 01:52:45.902995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:03.968 ===================================================== 00:17:03.968 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:03.968 ===================================================== 00:17:03.968 Controller Capabilities/Features 00:17:03.968 ================================ 00:17:03.968 Vendor ID: 4e58 00:17:03.968 Subsystem Vendor ID: 4e58 00:17:03.968 Serial Number: SPDK1 00:17:03.968 Model Number: SPDK bdev Controller 00:17:03.968 Firmware Version: 24.09 00:17:03.968 Recommended Arb Burst: 6 00:17:03.968 IEEE OUI Identifier: 8d 6b 50 00:17:03.968 Multi-path I/O 00:17:03.968 May have multiple subsystem ports: Yes 00:17:03.968 May have multiple controllers: Yes 00:17:03.968 Associated with SR-IOV VF: No 00:17:03.968 Max Data Transfer Size: 131072 00:17:03.968 Max Number of Namespaces: 32 00:17:03.968 Max Number of I/O Queues: 127 00:17:03.968 NVMe Specification Version (VS): 1.3 00:17:03.968 NVMe Specification Version (Identify): 1.3 00:17:03.968 Maximum Queue Entries: 256 00:17:03.968 Contiguous Queues Required: Yes 00:17:03.968 Arbitration Mechanisms Supported 00:17:03.968 Weighted Round Robin: Not Supported 00:17:03.968 Vendor Specific: Not Supported 00:17:03.968 Reset Timeout: 15000 ms 00:17:03.968 Doorbell Stride: 4 bytes 00:17:03.968 NVM Subsystem Reset: Not Supported 00:17:03.968 Command Sets Supported 00:17:03.968 NVM Command Set: Supported 00:17:03.968 Boot Partition: Not Supported 00:17:03.968 Memory Page Size Minimum: 4096 bytes 00:17:03.968 Memory Page Size Maximum: 4096 bytes 00:17:03.968 Persistent Memory Region: Not 
Supported 00:17:03.968 Optional Asynchronous Events Supported 00:17:03.968 Namespace Attribute Notices: Supported 00:17:03.968 Firmware Activation Notices: Not Supported 00:17:03.968 ANA Change Notices: Not Supported 00:17:03.968 PLE Aggregate Log Change Notices: Not Supported 00:17:03.968 LBA Status Info Alert Notices: Not Supported 00:17:03.968 EGE Aggregate Log Change Notices: Not Supported 00:17:03.968 Normal NVM Subsystem Shutdown event: Not Supported 00:17:03.968 Zone Descriptor Change Notices: Not Supported 00:17:03.968 Discovery Log Change Notices: Not Supported 00:17:03.968 Controller Attributes 00:17:03.968 128-bit Host Identifier: Supported 00:17:03.968 Non-Operational Permissive Mode: Not Supported 00:17:03.968 NVM Sets: Not Supported 00:17:03.968 Read Recovery Levels: Not Supported 00:17:03.968 Endurance Groups: Not Supported 00:17:03.968 Predictable Latency Mode: Not Supported 00:17:03.968 Traffic Based Keep ALive: Not Supported 00:17:03.968 Namespace Granularity: Not Supported 00:17:03.968 SQ Associations: Not Supported 00:17:03.968 UUID List: Not Supported 00:17:03.968 Multi-Domain Subsystem: Not Supported 00:17:03.968 Fixed Capacity Management: Not Supported 00:17:03.968 Variable Capacity Management: Not Supported 00:17:03.968 Delete Endurance Group: Not Supported 00:17:03.968 Delete NVM Set: Not Supported 00:17:03.968 Extended LBA Formats Supported: Not Supported 00:17:03.968 Flexible Data Placement Supported: Not Supported 00:17:03.968 00:17:03.968 Controller Memory Buffer Support 00:17:03.968 ================================ 00:17:03.968 Supported: No 00:17:03.968 00:17:03.968 Persistent Memory Region Support 00:17:03.968 ================================ 00:17:03.968 Supported: No 00:17:03.968 00:17:03.968 Admin Command Set Attributes 00:17:03.968 ============================ 00:17:03.968 Security Send/Receive: Not Supported 00:17:03.968 Format NVM: Not Supported 00:17:03.968 Firmware Activate/Download: Not Supported 00:17:03.968 Namespace 
Management: Not Supported 00:17:03.968 Device Self-Test: Not Supported 00:17:03.968 Directives: Not Supported 00:17:03.968 NVMe-MI: Not Supported 00:17:03.968 Virtualization Management: Not Supported 00:17:03.968 Doorbell Buffer Config: Not Supported 00:17:03.968 Get LBA Status Capability: Not Supported 00:17:03.968 Command & Feature Lockdown Capability: Not Supported 00:17:03.968 Abort Command Limit: 4 00:17:03.968 Async Event Request Limit: 4 00:17:03.968 Number of Firmware Slots: N/A 00:17:03.968 Firmware Slot 1 Read-Only: N/A 00:17:03.968 Firmware Activation Without Reset: N/A 00:17:03.968 Multiple Update Detection Support: N/A 00:17:03.968 Firmware Update Granularity: No Information Provided 00:17:03.968 Per-Namespace SMART Log: No 00:17:03.968 Asymmetric Namespace Access Log Page: Not Supported 00:17:03.968 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:03.968 Command Effects Log Page: Supported 00:17:03.968 Get Log Page Extended Data: Supported 00:17:03.968 Telemetry Log Pages: Not Supported 00:17:03.968 Persistent Event Log Pages: Not Supported 00:17:03.968 Supported Log Pages Log Page: May Support 00:17:03.968 Commands Supported & Effects Log Page: Not Supported 00:17:03.968 Feature Identifiers & Effects Log Page:May Support 00:17:03.968 NVMe-MI Commands & Effects Log Page: May Support 00:17:03.968 Data Area 4 for Telemetry Log: Not Supported 00:17:03.968 Error Log Page Entries Supported: 128 00:17:03.968 Keep Alive: Supported 00:17:03.968 Keep Alive Granularity: 10000 ms 00:17:03.968 00:17:03.968 NVM Command Set Attributes 00:17:03.968 ========================== 00:17:03.968 Submission Queue Entry Size 00:17:03.968 Max: 64 00:17:03.968 Min: 64 00:17:03.968 Completion Queue Entry Size 00:17:03.968 Max: 16 00:17:03.968 Min: 16 00:17:03.968 Number of Namespaces: 32 00:17:03.968 Compare Command: Supported 00:17:03.968 Write Uncorrectable Command: Not Supported 00:17:03.968 Dataset Management Command: Supported 00:17:03.968 Write Zeroes Command: Supported 
00:17:03.968 Set Features Save Field: Not Supported 00:17:03.968 Reservations: Not Supported 00:17:03.968 Timestamp: Not Supported 00:17:03.968 Copy: Supported 00:17:03.968 Volatile Write Cache: Present 00:17:03.968 Atomic Write Unit (Normal): 1 00:17:03.968 Atomic Write Unit (PFail): 1 00:17:03.968 Atomic Compare & Write Unit: 1 00:17:03.968 Fused Compare & Write: Supported 00:17:03.968 Scatter-Gather List 00:17:03.968 SGL Command Set: Supported (Dword aligned) 00:17:03.968 SGL Keyed: Not Supported 00:17:03.968 SGL Bit Bucket Descriptor: Not Supported 00:17:03.968 SGL Metadata Pointer: Not Supported 00:17:03.968 Oversized SGL: Not Supported 00:17:03.968 SGL Metadata Address: Not Supported 00:17:03.968 SGL Offset: Not Supported 00:17:03.968 Transport SGL Data Block: Not Supported 00:17:03.968 Replay Protected Memory Block: Not Supported 00:17:03.968 00:17:03.968 Firmware Slot Information 00:17:03.968 ========================= 00:17:03.968 Active slot: 1 00:17:03.968 Slot 1 Firmware Revision: 24.09 00:17:03.968 00:17:03.968 00:17:03.968 Commands Supported and Effects 00:17:03.968 ============================== 00:17:03.968 Admin Commands 00:17:03.968 -------------- 00:17:03.968 Get Log Page (02h): Supported 00:17:03.968 Identify (06h): Supported 00:17:03.968 Abort (08h): Supported 00:17:03.968 Set Features (09h): Supported 00:17:03.968 Get Features (0Ah): Supported 00:17:03.968 Asynchronous Event Request (0Ch): Supported 00:17:03.968 Keep Alive (18h): Supported 00:17:03.968 I/O Commands 00:17:03.968 ------------ 00:17:03.968 Flush (00h): Supported LBA-Change 00:17:03.968 Write (01h): Supported LBA-Change 00:17:03.968 Read (02h): Supported 00:17:03.968 Compare (05h): Supported 00:17:03.968 Write Zeroes (08h): Supported LBA-Change 00:17:03.968 Dataset Management (09h): Supported LBA-Change 00:17:03.968 Copy (19h): Supported LBA-Change 00:17:03.968 00:17:03.968 Error Log 00:17:03.968 ========= 00:17:03.968 00:17:03.968 Arbitration 00:17:03.968 =========== 00:17:03.968 
Arbitration Burst: 1 00:17:03.968 00:17:03.968 Power Management 00:17:03.968 ================ 00:17:03.968 Number of Power States: 1 00:17:03.968 Current Power State: Power State #0 00:17:03.968 Power State #0: 00:17:03.968 Max Power: 0.00 W 00:17:03.968 Non-Operational State: Operational 00:17:03.968 Entry Latency: Not Reported 00:17:03.968 Exit Latency: Not Reported 00:17:03.968 Relative Read Throughput: 0 00:17:03.968 Relative Read Latency: 0 00:17:03.968 Relative Write Throughput: 0 00:17:03.968 Relative Write Latency: 0 00:17:03.968 Idle Power: Not Reported 00:17:03.968 Active Power: Not Reported 00:17:03.968 Non-Operational Permissive Mode: Not Supported 00:17:03.968 00:17:03.968 Health Information 00:17:03.968 ================== 00:17:03.968 Critical Warnings: 00:17:03.968 Available Spare Space: OK 00:17:03.968 Temperature: OK 00:17:03.968 Device Reliability: OK 00:17:03.968 Read Only: No 00:17:03.968 Volatile Memory Backup: OK 00:17:03.968 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:03.968 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:03.968 Available Spare: 0% 00:17:03.968 Available Spare Threshold: 0% 00:17:03.968 [2024-07-26 01:52:45.903164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:03.968 [2024-07-26 01:52:45.903181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:03.968 [2024-07-26 01:52:45.903227] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:03.968 [2024-07-26 01:52:45.903246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.968 [2024-07-26 01:52:45.903257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.968 [2024-07-26 01:52:45.903267] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.968 [2024-07-26 01:52:45.903277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.968 [2024-07-26 01:52:45.907071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:03.968 [2024-07-26 01:52:45.907097] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:03.968 [2024-07-26 01:52:45.907713] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:03.968 [2024-07-26 01:52:45.907797] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:03.968 [2024-07-26 01:52:45.907811] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:03.968 [2024-07-26 01:52:45.908731] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:03.968 [2024-07-26 01:52:45.908754] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:03.968 [2024-07-26 01:52:45.908810] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:03.968 [2024-07-26 01:52:45.910778] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:03.968 Life Percentage Used: 0% 00:17:03.968 Data Units Read: 0 00:17:03.968 Data Units Written: 0 00:17:03.968 Host Read Commands: 0 00:17:03.968 Host Write Commands: 
0 00:17:03.968 Controller Busy Time: 0 minutes 00:17:03.968 Power Cycles: 0 00:17:03.968 Power On Hours: 0 hours 00:17:03.968 Unsafe Shutdowns: 0 00:17:03.968 Unrecoverable Media Errors: 0 00:17:03.968 Lifetime Error Log Entries: 0 00:17:03.968 Warning Temperature Time: 0 minutes 00:17:03.968 Critical Temperature Time: 0 minutes 00:17:03.968 00:17:03.968 Number of Queues 00:17:03.968 ================ 00:17:03.968 Number of I/O Submission Queues: 127 00:17:03.968 Number of I/O Completion Queues: 127 00:17:03.968 00:17:03.968 Active Namespaces 00:17:03.968 ================= 00:17:03.968 Namespace ID:1 00:17:03.968 Error Recovery Timeout: Unlimited 00:17:03.968 Command Set Identifier: NVM (00h) 00:17:03.968 Deallocate: Supported 00:17:03.968 Deallocated/Unwritten Error: Not Supported 00:17:03.968 Deallocated Read Value: Unknown 00:17:03.968 Deallocate in Write Zeroes: Not Supported 00:17:03.968 Deallocated Guard Field: 0xFFFF 00:17:03.968 Flush: Supported 00:17:03.968 Reservation: Supported 00:17:03.968 Namespace Sharing Capabilities: Multiple Controllers 00:17:03.968 Size (in LBAs): 131072 (0GiB) 00:17:03.968 Capacity (in LBAs): 131072 (0GiB) 00:17:03.968 Utilization (in LBAs): 131072 (0GiB) 00:17:03.968 NGUID: 192DC55AFED34CB5BFE2A731A920B504 00:17:03.968 UUID: 192dc55a-fed3-4cb5-bfe2-a731a920b504 00:17:03.968 Thin Provisioning: Not Supported 00:17:03.968 Per-NS Atomic Units: Yes 00:17:03.968 Atomic Boundary Size (Normal): 0 00:17:03.968 Atomic Boundary Size (PFail): 0 00:17:03.968 Atomic Boundary Offset: 0 00:17:03.968 Maximum Single Source Range Length: 65535 00:17:03.968 Maximum Copy Length: 65535 00:17:03.968 Maximum Source Range Count: 1 00:17:03.968 NGUID/EUI64 Never Reused: No 00:17:03.968 Namespace Write Protected: No 00:17:03.968 Number of LBA Formats: 1 00:17:03.968 Current LBA Format: LBA Format #00 00:17:03.968 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:03.968 00:17:03.968 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:04.225 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.225 [2024-07-26 01:52:46.149899] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:09.488 Initializing NVMe Controllers 00:17:09.488 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:09.488 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:09.488 Initialization complete. Launching workers. 00:17:09.488 ======================================================== 00:17:09.488 Latency(us) 00:17:09.488 Device Information : IOPS MiB/s Average min max 00:17:09.488 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34904.60 136.35 3665.95 1163.89 7598.81 00:17:09.488 ======================================================== 00:17:09.488 Total : 34904.60 136.35 3665.95 1163.89 7598.81 00:17:09.488 00:17:09.488 [2024-07-26 01:52:51.171317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:09.488 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:09.488 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.488 [2024-07-26 01:52:51.411496] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:14.760 Initializing NVMe Controllers 00:17:14.760 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:14.760 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:14.760 Initialization complete. Launching workers. 00:17:14.760 ======================================================== 00:17:14.760 Latency(us) 00:17:14.760 Device Information : IOPS MiB/s Average min max 00:17:14.760 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16031.71 62.62 7989.43 6957.52 14961.53 00:17:14.760 ======================================================== 00:17:14.760 Total : 16031.71 62.62 7989.43 6957.52 14961.53 00:17:14.760 00:17:14.760 [2024-07-26 01:52:56.458274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:14.760 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:14.760 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.760 [2024-07-26 01:52:56.668281] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:20.024 [2024-07-26 01:53:01.752477] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:20.024 Initializing NVMe Controllers 00:17:20.024 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:20.024 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:20.024 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:20.024 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:20.024 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:20.024 Initialization complete. Launching workers. 00:17:20.024 Starting thread on core 2 00:17:20.024 Starting thread on core 3 00:17:20.024 Starting thread on core 1 00:17:20.024 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:20.024 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.282 [2024-07-26 01:53:02.073564] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:23.562 [2024-07-26 01:53:05.140238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:23.562 Initializing NVMe Controllers 00:17:23.562 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.562 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.562 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:23.562 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:23.562 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:23.562 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:23.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:23.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:23.562 Initialization complete. Launching workers. 
00:17:23.562 Starting thread on core 1 with urgent priority queue 00:17:23.562 Starting thread on core 2 with urgent priority queue 00:17:23.562 Starting thread on core 3 with urgent priority queue 00:17:23.562 Starting thread on core 0 with urgent priority queue 00:17:23.562 SPDK bdev Controller (SPDK1 ) core 0: 5963.00 IO/s 16.77 secs/100000 ios 00:17:23.562 SPDK bdev Controller (SPDK1 ) core 1: 5886.67 IO/s 16.99 secs/100000 ios 00:17:23.562 SPDK bdev Controller (SPDK1 ) core 2: 5815.33 IO/s 17.20 secs/100000 ios 00:17:23.562 SPDK bdev Controller (SPDK1 ) core 3: 5841.67 IO/s 17.12 secs/100000 ios 00:17:23.562 ======================================================== 00:17:23.562 00:17:23.562 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:23.562 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.562 [2024-07-26 01:53:05.441542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:23.562 Initializing NVMe Controllers 00:17:23.562 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.562 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.562 Namespace ID: 1 size: 0GB 00:17:23.562 Initialization complete. 00:17:23.562 INFO: using host memory buffer for IO 00:17:23.562 Hello world! 
00:17:23.562 [2024-07-26 01:53:05.475085] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:23.562 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:23.562 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.820 [2024-07-26 01:53:05.757502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:25.191 Initializing NVMe Controllers 00:17:25.191 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.191 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.191 Initialization complete. Launching workers. 00:17:25.191 submit (in ns) avg, min, max = 7830.2, 3506.7, 4016774.4 00:17:25.191 complete (in ns) avg, min, max = 26869.2, 2065.6, 4023214.4 00:17:25.191 00:17:25.191 Submit histogram 00:17:25.191 ================ 00:17:25.191 Range in us Cumulative Count 00:17:25.191 3.484 - 3.508: 0.0075% ( 1) 00:17:25.191 3.508 - 3.532: 0.4668% ( 61) 00:17:25.191 3.532 - 3.556: 1.2950% ( 110) 00:17:25.191 3.556 - 3.579: 4.2840% ( 397) 00:17:25.191 3.579 - 3.603: 8.7863% ( 598) 00:17:25.191 3.603 - 3.627: 16.4358% ( 1016) 00:17:25.191 3.627 - 3.650: 26.3289% ( 1314) 00:17:25.191 3.650 - 3.674: 36.5081% ( 1352) 00:17:25.191 3.674 - 3.698: 44.8125% ( 1103) 00:17:25.191 3.698 - 3.721: 51.9651% ( 950) 00:17:25.191 3.721 - 3.745: 57.4386% ( 727) 00:17:25.191 3.745 - 3.769: 61.9334% ( 597) 00:17:25.191 3.769 - 3.793: 65.7356% ( 505) 00:17:25.191 3.793 - 3.816: 68.9354% ( 425) 00:17:25.191 3.816 - 3.840: 72.2557% ( 441) 00:17:25.191 3.840 - 3.864: 75.8470% ( 477) 00:17:25.191 3.864 - 3.887: 79.3781% ( 469) 00:17:25.191 3.887 - 3.911: 82.8189% ( 457) 00:17:25.191 3.911 - 3.935: 85.9133% ( 411) 00:17:25.191 3.935 - 
3.959: 88.0590% ( 285) 00:17:25.191 3.959 - 3.982: 89.8961% ( 244) 00:17:25.191 3.982 - 4.006: 91.6202% ( 229) 00:17:25.191 4.006 - 4.030: 92.9604% ( 178) 00:17:25.191 4.030 - 4.053: 94.1349% ( 156) 00:17:25.191 4.053 - 4.077: 95.2793% ( 152) 00:17:25.191 4.077 - 4.101: 95.8816% ( 80) 00:17:25.191 4.101 - 4.124: 96.3484% ( 62) 00:17:25.191 4.124 - 4.148: 96.5969% ( 33) 00:17:25.191 4.148 - 4.172: 96.8303% ( 31) 00:17:25.191 4.172 - 4.196: 96.9733% ( 19) 00:17:25.191 4.196 - 4.219: 97.0938% ( 16) 00:17:25.191 4.219 - 4.243: 97.2143% ( 16) 00:17:25.191 4.243 - 4.267: 97.2670% ( 7) 00:17:25.191 4.267 - 4.290: 97.3799% ( 15) 00:17:25.191 4.290 - 4.314: 97.4477% ( 9) 00:17:25.191 4.314 - 4.338: 97.4627% ( 2) 00:17:25.191 4.338 - 4.361: 97.5079% ( 6) 00:17:25.191 4.361 - 4.385: 97.5380% ( 4) 00:17:25.191 4.385 - 4.409: 97.5907% ( 7) 00:17:25.191 4.409 - 4.433: 97.6133% ( 3) 00:17:25.191 4.433 - 4.456: 97.6284% ( 2) 00:17:25.191 4.456 - 4.480: 97.6359% ( 1) 00:17:25.191 4.480 - 4.504: 97.6434% ( 1) 00:17:25.191 4.504 - 4.527: 97.6660% ( 3) 00:17:25.191 4.527 - 4.551: 97.6886% ( 3) 00:17:25.191 4.551 - 4.575: 97.7187% ( 4) 00:17:25.191 4.599 - 4.622: 97.7262% ( 1) 00:17:25.191 4.622 - 4.646: 97.7413% ( 2) 00:17:25.191 4.646 - 4.670: 97.7639% ( 3) 00:17:25.191 4.670 - 4.693: 97.7714% ( 1) 00:17:25.191 4.693 - 4.717: 97.7940% ( 3) 00:17:25.191 4.717 - 4.741: 97.8467% ( 7) 00:17:25.191 4.741 - 4.764: 97.8618% ( 2) 00:17:25.191 4.764 - 4.788: 97.8844% ( 3) 00:17:25.191 4.788 - 4.812: 97.9521% ( 9) 00:17:25.191 4.812 - 4.836: 97.9747% ( 3) 00:17:25.191 4.836 - 4.859: 98.0274% ( 7) 00:17:25.191 4.859 - 4.883: 98.0425% ( 2) 00:17:25.191 4.883 - 4.907: 98.0651% ( 3) 00:17:25.191 4.907 - 4.930: 98.0726% ( 1) 00:17:25.191 4.930 - 4.954: 98.1253% ( 7) 00:17:25.191 4.954 - 4.978: 98.1479% ( 3) 00:17:25.191 4.978 - 5.001: 98.1629% ( 2) 00:17:25.191 5.001 - 5.025: 98.2081% ( 6) 00:17:25.191 5.025 - 5.049: 98.2156% ( 1) 00:17:25.191 5.049 - 5.073: 98.2232% ( 1) 00:17:25.191 5.073 - 
5.096: 98.2457% ( 3) 00:17:25.191 5.096 - 5.120: 98.2683% ( 3) 00:17:25.191 5.120 - 5.144: 98.2759% ( 1) 00:17:25.191 5.144 - 5.167: 98.2834% ( 1) 00:17:25.191 5.167 - 5.191: 98.2909% ( 1) 00:17:25.191 5.191 - 5.215: 98.3210% ( 4) 00:17:25.191 5.215 - 5.239: 98.3286% ( 1) 00:17:25.191 5.286 - 5.310: 98.3361% ( 1) 00:17:25.191 5.310 - 5.333: 98.3512% ( 2) 00:17:25.191 5.333 - 5.357: 98.3662% ( 2) 00:17:25.191 5.357 - 5.381: 98.3737% ( 1) 00:17:25.191 5.404 - 5.428: 98.3813% ( 1) 00:17:25.191 5.428 - 5.452: 98.3888% ( 1) 00:17:25.191 5.476 - 5.499: 98.3963% ( 1) 00:17:25.191 5.523 - 5.547: 98.4039% ( 1) 00:17:25.191 5.689 - 5.713: 98.4114% ( 1) 00:17:25.191 5.736 - 5.760: 98.4189% ( 1) 00:17:25.191 5.855 - 5.879: 98.4264% ( 1) 00:17:25.191 6.044 - 6.068: 98.4340% ( 1) 00:17:25.191 6.116 - 6.163: 98.4415% ( 1) 00:17:25.191 6.163 - 6.210: 98.4490% ( 1) 00:17:25.191 6.305 - 6.353: 98.4867% ( 5) 00:17:25.191 6.353 - 6.400: 98.4942% ( 1) 00:17:25.191 6.400 - 6.447: 98.5017% ( 1) 00:17:25.191 6.447 - 6.495: 98.5168% ( 2) 00:17:25.191 6.495 - 6.542: 98.5318% ( 2) 00:17:25.191 6.542 - 6.590: 98.5469% ( 2) 00:17:25.191 6.684 - 6.732: 98.5620% ( 2) 00:17:25.191 6.827 - 6.874: 98.5770% ( 2) 00:17:25.191 6.874 - 6.921: 98.5921% ( 2) 00:17:25.191 6.969 - 7.016: 98.5996% ( 1) 00:17:25.191 7.016 - 7.064: 98.6071% ( 1) 00:17:25.191 7.111 - 7.159: 98.6147% ( 1) 00:17:25.191 7.159 - 7.206: 98.6222% ( 1) 00:17:25.191 7.206 - 7.253: 98.6297% ( 1) 00:17:25.191 7.301 - 7.348: 98.6373% ( 1) 00:17:25.191 7.348 - 7.396: 98.6448% ( 1) 00:17:25.191 7.396 - 7.443: 98.6598% ( 2) 00:17:25.191 7.443 - 7.490: 98.6674% ( 1) 00:17:25.191 7.538 - 7.585: 98.6824% ( 2) 00:17:25.191 7.585 - 7.633: 98.6900% ( 1) 00:17:25.191 7.680 - 7.727: 98.6975% ( 1) 00:17:25.191 7.727 - 7.775: 98.7050% ( 1) 00:17:25.191 7.775 - 7.822: 98.7125% ( 1) 00:17:25.191 7.870 - 7.917: 98.7427% ( 4) 00:17:25.191 7.964 - 8.012: 98.7502% ( 1) 00:17:25.191 8.012 - 8.059: 98.7577% ( 1) 00:17:25.191 8.059 - 8.107: 98.7652% ( 1) 
00:17:25.191 8.107 - 8.154: 98.7728% ( 1) 00:17:25.191 8.154 - 8.201: 98.7803% ( 1) 00:17:25.191 8.296 - 8.344: 98.7878% ( 1) 00:17:25.191 8.439 - 8.486: 98.7954% ( 1) 00:17:25.191 8.723 - 8.770: 98.8029% ( 1) 00:17:25.191 8.960 - 9.007: 98.8104% ( 1) 00:17:25.191 9.102 - 9.150: 98.8179% ( 1) 00:17:25.191 9.197 - 9.244: 98.8255% ( 1) 00:17:25.191 9.244 - 9.292: 98.8330% ( 1) 00:17:25.192 9.387 - 9.434: 98.8405% ( 1) 00:17:25.192 9.481 - 9.529: 98.8631% ( 3) 00:17:25.192 9.813 - 9.861: 98.8707% ( 1) 00:17:25.192 10.003 - 10.050: 98.8782% ( 1) 00:17:25.192 10.193 - 10.240: 98.8932% ( 2) 00:17:25.192 10.382 - 10.430: 98.9008% ( 1) 00:17:25.192 10.714 - 10.761: 98.9083% ( 1) 00:17:25.192 10.999 - 11.046: 98.9158% ( 1) 00:17:25.192 11.141 - 11.188: 98.9234% ( 1) 00:17:25.192 11.188 - 11.236: 98.9309% ( 1) 00:17:25.192 11.236 - 11.283: 98.9384% ( 1) 00:17:25.192 11.330 - 11.378: 98.9459% ( 1) 00:17:25.192 11.520 - 11.567: 98.9535% ( 1) 00:17:25.192 11.899 - 11.947: 98.9685% ( 2) 00:17:25.192 12.041 - 12.089: 98.9761% ( 1) 00:17:25.192 12.089 - 12.136: 98.9911% ( 2) 00:17:25.192 12.326 - 12.421: 98.9986% ( 1) 00:17:25.192 12.421 - 12.516: 99.0062% ( 1) 00:17:25.192 12.610 - 12.705: 99.0137% ( 1) 00:17:25.192 12.705 - 12.800: 99.0288% ( 2) 00:17:25.192 12.800 - 12.895: 99.0438% ( 2) 00:17:25.192 12.990 - 13.084: 99.0739% ( 4) 00:17:25.192 13.084 - 13.179: 99.0815% ( 1) 00:17:25.192 13.179 - 13.274: 99.0890% ( 1) 00:17:25.192 13.369 - 13.464: 99.1116% ( 3) 00:17:25.192 13.748 - 13.843: 99.1191% ( 1) 00:17:25.192 13.843 - 13.938: 99.1266% ( 1) 00:17:25.192 14.317 - 14.412: 99.1417% ( 2) 00:17:25.192 14.507 - 14.601: 99.1492% ( 1) 00:17:25.192 14.601 - 14.696: 99.1718% ( 3) 00:17:25.192 15.076 - 15.170: 99.1793% ( 1) 00:17:25.192 17.161 - 17.256: 99.1869% ( 1) 00:17:25.192 17.256 - 17.351: 99.2170% ( 4) 00:17:25.192 17.351 - 17.446: 99.2320% ( 2) 00:17:25.192 17.446 - 17.541: 99.2622% ( 4) 00:17:25.192 17.541 - 17.636: 99.2697% ( 1) 00:17:25.192 17.636 - 17.730: 99.3299% ( 8) 
00:17:25.192 17.730 - 17.825: 99.3525% ( 3) 00:17:25.192 17.825 - 17.920: 99.3751% ( 3) 00:17:25.192 17.920 - 18.015: 99.4052% ( 4) 00:17:25.192 18.015 - 18.110: 99.4127% ( 1) 00:17:25.192 18.110 - 18.204: 99.4504% ( 5) 00:17:25.192 18.204 - 18.299: 99.4880% ( 5) 00:17:25.192 18.299 - 18.394: 99.5031% ( 2) 00:17:25.192 18.394 - 18.489: 99.5633% ( 8) 00:17:25.192 18.489 - 18.584: 99.6085% ( 6) 00:17:25.192 18.584 - 18.679: 99.6386% ( 4) 00:17:25.192 18.679 - 18.773: 99.6461% ( 1) 00:17:25.192 18.773 - 18.868: 99.6838% ( 5) 00:17:25.192 18.868 - 18.963: 99.7214% ( 5) 00:17:25.192 18.963 - 19.058: 99.7515% ( 4) 00:17:25.192 19.058 - 19.153: 99.7666% ( 2) 00:17:25.192 19.247 - 19.342: 99.7817% ( 2) 00:17:25.192 19.342 - 19.437: 99.7892% ( 1) 00:17:25.192 19.437 - 19.532: 99.7967% ( 1) 00:17:25.192 19.532 - 19.627: 99.8118% ( 2) 00:17:25.192 19.627 - 19.721: 99.8419% ( 4) 00:17:25.192 19.816 - 19.911: 99.8494% ( 1) 00:17:25.192 20.006 - 20.101: 99.8569% ( 1) 00:17:25.192 20.101 - 20.196: 99.8645% ( 1) 00:17:25.192 20.290 - 20.385: 99.8720% ( 1) 00:17:25.192 23.609 - 23.704: 99.8795% ( 1) 00:17:25.192 23.893 - 23.988: 99.8871% ( 1) 00:17:25.192 24.462 - 24.652: 99.8946% ( 1) 00:17:25.192 25.600 - 25.790: 99.9021% ( 1) 00:17:25.192 3980.705 - 4004.978: 99.9699% ( 9) 00:17:25.192 4004.978 - 4029.250: 100.0000% ( 4) 00:17:25.192 00:17:25.192 Complete histogram 00:17:25.192 ================== 00:17:25.192 Range in us Cumulative Count 00:17:25.192 2.062 - 2.074: 2.4695% ( 328) 00:17:25.192 2.074 - 2.086: 35.9434% ( 4446) 00:17:25.192 2.086 - 2.098: 44.6619% ( 1158) 00:17:25.192 2.098 - 2.110: 48.7954% ( 549) 00:17:25.192 2.110 - 2.121: 59.3209% ( 1398) 00:17:25.192 2.121 - 2.133: 61.5193% ( 292) 00:17:25.192 2.133 - 2.145: 67.3393% ( 773) 00:17:25.192 2.145 - 2.157: 78.4370% ( 1474) 00:17:25.192 2.157 - 2.169: 79.8600% ( 189) 00:17:25.192 2.169 - 2.181: 82.7963% ( 390) 00:17:25.192 2.181 - 2.193: 86.6812% ( 516) 00:17:25.192 2.193 - 2.204: 87.5245% ( 112) 00:17:25.192 2.204 - 
2.216: 88.7216% ( 159) 00:17:25.192 2.216 - 2.228: 91.3341% ( 347) 00:17:25.192 2.228 - 2.240: 93.3670% ( 270) 00:17:25.192 2.240 - 2.252: 94.3156% ( 126) 00:17:25.192 2.252 - 2.264: 95.0083% ( 92) 00:17:25.192 2.264 - 2.276: 95.2191% ( 28) 00:17:25.192 2.276 - 2.287: 95.3923% ( 23) 00:17:25.192 2.287 - 2.299: 95.6031% ( 28) 00:17:25.192 2.299 - 2.311: 95.9419% ( 45) 00:17:25.192 2.311 - 2.323: 96.1602% ( 29) 00:17:25.192 2.323 - 2.335: 96.2204% ( 8) 00:17:25.192 2.335 - 2.347: 96.2957% ( 10) 00:17:25.192 2.347 - 2.359: 96.4313% ( 18) 00:17:25.192 2.359 - 2.370: 96.6571% ( 30) 00:17:25.192 2.370 - 2.382: 96.9508% ( 39) 00:17:25.192 2.382 - 2.394: 97.2594% ( 41) 00:17:25.192 2.394 - 2.406: 97.4928% ( 31) 00:17:25.192 2.406 - 2.418: 97.7112% ( 29) 00:17:25.192 2.418 - 2.430: 97.9220% ( 28) 00:17:25.192 2.430 - 2.441: 98.1178% ( 26) 00:17:25.192 2.441 - 2.453: 98.1705% ( 7) 00:17:25.192 2.453 - 2.465: 98.2759% ( 14) 00:17:25.192 2.465 - 2.477: 98.3361% ( 8) 00:17:25.192 2.477 - 2.489: 98.3813% ( 6) 00:17:25.192 2.489 - 2.501: 98.4039% ( 3) 00:17:25.192 2.501 - 2.513: 98.4114% ( 1) 00:17:25.192 2.513 - 2.524: 98.4340% ( 3) 00:17:25.192 2.524 - 2.536: 98.4415% ( 1) 00:17:25.192 2.560 - 2.572: 98.4490% ( 1) 00:17:25.192 2.584 - 2.596: 98.4566% ( 1) 00:17:25.192 2.596 - 2.607: 98.4716% ( 2) 00:17:25.192 2.607 - 2.619: 98.4867% ( 2) 00:17:25.192 2.643 - 2.655: 98.4942% ( 1) 00:17:25.192 2.655 - 2.667: 98.5093% ( 2) 00:17:25.192 2.679 - 2.690: 98.5168% ( 1) 00:17:25.192 2.714 - 2.726: 98.5243% ( 1) 00:17:25.192 3.224 - 3.247: 98.5318% ( 1) 00:17:25.192 3.295 - 3.319: 98.5469% ( 2) 00:17:25.192 3.319 - 3.342: 98.5620% ( 2) 00:17:25.192 3.342 - 3.366: 98.5770% ( 2) 00:17:25.192 3.437 - 3.461: 98.5921% ( 2) 00:17:25.192 3.461 - 3.484: 98.5996% ( 1) 00:17:25.192 3.484 - 3.508: 98.6071% ( 1) 00:17:25.192 3.508 - 3.532: 98.6147% ( 1) 00:17:25.192 3.532 - 3.556: 98.6222% ( 1) 00:17:25.192 3.579 - 3.603: 98.6297% ( 1) 00:17:25.192 3.650 - 3.674: 98.6448% ( 2) 00:17:25.192 3.793 - 
3.816: 98.6523% ( 1) 00:17:25.192 3.816 - 3.840: 98.6598% ( 1) 00:17:25.192 3.864 - 3.887: 98.6674% ( 1) 00:17:25.192 [2024-07-26 01:53:06.780716] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:25.192 4.978 - 5.001: 98.6824% ( 2) 00:17:25.192 5.001 - 5.025: 98.6900% ( 1) 00:17:25.192 5.167 - 5.191: 98.6975% ( 1) 00:17:25.192 5.310 - 5.333: 98.7050% ( 1) 00:17:25.192 5.547 - 5.570: 98.7125% ( 1) 00:17:25.192 5.736 - 5.760: 98.7201% ( 1) 00:17:25.192 5.784 - 5.807: 98.7276% ( 1) 00:17:25.192 5.807 - 5.831: 98.7427% ( 2) 00:17:25.192 5.855 - 5.879: 98.7502% ( 1) 00:17:25.192 5.926 - 5.950: 98.7577% ( 1) 00:17:25.192 5.997 - 6.021: 98.7652% ( 1) 00:17:25.192 6.044 - 6.068: 98.7803% ( 2) 00:17:25.192 6.163 - 6.210: 98.7878% ( 1) 00:17:25.192 6.210 - 6.258: 98.7954% ( 1) 00:17:25.192 6.305 - 6.353: 98.8029% ( 1) 00:17:25.192 6.495 - 6.542: 98.8179% ( 2) 00:17:25.192 6.542 - 6.590: 98.8405% ( 3) 00:17:25.192 6.921 - 6.969: 98.8481% ( 1) 00:17:25.192 7.064 - 7.111: 98.8556% ( 1) 00:17:25.192 7.206 - 7.253: 98.8631% ( 1) 00:17:25.192 7.348 - 7.396: 98.8707% ( 1) 00:17:25.192 7.538 - 7.585: 98.8782% ( 1) 00:17:25.192 8.201 - 8.249: 98.8857% ( 1) 00:17:25.192 8.344 - 8.391: 98.8932% ( 1) 00:17:25.192 10.999 - 11.046: 98.9008% ( 1) 00:17:25.192 12.041 - 12.089: 98.9083% ( 1) 00:17:25.192 15.550 - 15.644: 98.9158% ( 1) 00:17:25.192 15.644 - 15.739: 98.9309% ( 2) 00:17:25.192 15.739 - 15.834: 98.9459% ( 2) 00:17:25.192 15.834 - 15.929: 98.9610% ( 2) 00:17:25.192 15.929 - 16.024: 98.9836% ( 3) 00:17:25.192 16.024 - 16.119: 98.9986% ( 2) 00:17:25.192 16.119 - 16.213: 99.0212% ( 3) 00:17:25.192 16.213 - 16.308: 99.0589% ( 5) 00:17:25.192 16.308 - 16.403: 99.0890% ( 4) 00:17:25.192 16.403 - 16.498: 99.1417% ( 7) 00:17:25.192 16.498 - 16.593: 99.1944% ( 7) 00:17:25.192 16.593 - 16.687: 99.2170% ( 3) 00:17:25.192 16.687 - 16.782: 99.2320% ( 2) 00:17:25.192 16.782 - 16.877: 99.2622% ( 4) 00:17:25.192 16.972 - 17.067: 
99.2847% ( 3) 00:17:25.192 17.067 - 17.161: 99.2998% ( 2) 00:17:25.192 17.161 - 17.256: 99.3299% ( 4) 00:17:25.192 17.446 - 17.541: 99.3374% ( 1) 00:17:25.192 17.541 - 17.636: 99.3450% ( 1) 00:17:25.192 17.730 - 17.825: 99.3600% ( 2) 00:17:25.192 17.920 - 18.015: 99.3676% ( 1) 00:17:25.193 18.015 - 18.110: 99.3751% ( 1) 00:17:25.193 18.394 - 18.489: 99.3826% ( 1) 00:17:25.193 3543.799 - 3568.071: 99.3902% ( 1) 00:17:25.193 3980.705 - 4004.978: 99.9021% ( 68) 00:17:25.193 4004.978 - 4029.250: 100.0000% ( 13) 00:17:25.193 00:17:25.193 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:25.193 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:25.193 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:25.193 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:25.193 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:25.193 [ 00:17:25.193 { 00:17:25.193 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:25.193 "subtype": "Discovery", 00:17:25.193 "listen_addresses": [], 00:17:25.193 "allow_any_host": true, 00:17:25.193 "hosts": [] 00:17:25.193 }, 00:17:25.193 { 00:17:25.193 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:25.193 "subtype": "NVMe", 00:17:25.193 "listen_addresses": [ 00:17:25.193 { 00:17:25.193 "trtype": "VFIOUSER", 00:17:25.193 "adrfam": "IPv4", 00:17:25.193 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:25.193 "trsvcid": "0" 00:17:25.193 } 00:17:25.193 ], 00:17:25.193 "allow_any_host": true, 00:17:25.193 "hosts": [], 00:17:25.193 "serial_number": "SPDK1", 
00:17:25.193 "model_number": "SPDK bdev Controller", 00:17:25.193 "max_namespaces": 32, 00:17:25.193 "min_cntlid": 1, 00:17:25.193 "max_cntlid": 65519, 00:17:25.193 "namespaces": [ 00:17:25.193 { 00:17:25.193 "nsid": 1, 00:17:25.193 "bdev_name": "Malloc1", 00:17:25.193 "name": "Malloc1", 00:17:25.193 "nguid": "192DC55AFED34CB5BFE2A731A920B504", 00:17:25.193 "uuid": "192dc55a-fed3-4cb5-bfe2-a731a920b504" 00:17:25.193 } 00:17:25.193 ] 00:17:25.193 }, 00:17:25.193 { 00:17:25.193 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:25.193 "subtype": "NVMe", 00:17:25.193 "listen_addresses": [ 00:17:25.193 { 00:17:25.193 "trtype": "VFIOUSER", 00:17:25.193 "adrfam": "IPv4", 00:17:25.193 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:25.193 "trsvcid": "0" 00:17:25.193 } 00:17:25.193 ], 00:17:25.193 "allow_any_host": true, 00:17:25.193 "hosts": [], 00:17:25.193 "serial_number": "SPDK2", 00:17:25.193 "model_number": "SPDK bdev Controller", 00:17:25.193 "max_namespaces": 32, 00:17:25.193 "min_cntlid": 1, 00:17:25.193 "max_cntlid": 65519, 00:17:25.193 "namespaces": [ 00:17:25.193 { 00:17:25.193 "nsid": 1, 00:17:25.193 "bdev_name": "Malloc2", 00:17:25.193 "name": "Malloc2", 00:17:25.193 "nguid": "AAEA8FF68F8545CCB3591C4F549EE06C", 00:17:25.193 "uuid": "aaea8ff6-8f85-45cc-b359-1c4f549ee06c" 00:17:25.193 } 00:17:25.193 ] 00:17:25.193 } 00:17:25.193 ] 00:17:25.193 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:25.193 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2262022 00:17:25.193 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:25.193 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:25.193 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:25.193 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:25.193 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:25.193 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:25.193 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:25.193 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:25.193 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.450 [2024-07-26 01:53:07.233534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:25.450 Malloc3 00:17:25.450 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:25.707 [2024-07-26 01:53:07.595152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:25.707 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:25.707 Asynchronous Event Request test 00:17:25.707 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.707 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.707 Registering asynchronous event callbacks... 00:17:25.707 Starting namespace attribute notice tests for all controllers... 
00:17:25.707 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:25.707 aer_cb - Changed Namespace 00:17:25.707 Cleaning up... 00:17:25.965 [ 00:17:25.965 { 00:17:25.965 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:25.965 "subtype": "Discovery", 00:17:25.965 "listen_addresses": [], 00:17:25.965 "allow_any_host": true, 00:17:25.965 "hosts": [] 00:17:25.965 }, 00:17:25.965 { 00:17:25.965 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:25.965 "subtype": "NVMe", 00:17:25.965 "listen_addresses": [ 00:17:25.965 { 00:17:25.965 "trtype": "VFIOUSER", 00:17:25.965 "adrfam": "IPv4", 00:17:25.965 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:25.965 "trsvcid": "0" 00:17:25.965 } 00:17:25.965 ], 00:17:25.965 "allow_any_host": true, 00:17:25.965 "hosts": [], 00:17:25.965 "serial_number": "SPDK1", 00:17:25.965 "model_number": "SPDK bdev Controller", 00:17:25.965 "max_namespaces": 32, 00:17:25.965 "min_cntlid": 1, 00:17:25.965 "max_cntlid": 65519, 00:17:25.965 "namespaces": [ 00:17:25.965 { 00:17:25.965 "nsid": 1, 00:17:25.965 "bdev_name": "Malloc1", 00:17:25.965 "name": "Malloc1", 00:17:25.965 "nguid": "192DC55AFED34CB5BFE2A731A920B504", 00:17:25.965 "uuid": "192dc55a-fed3-4cb5-bfe2-a731a920b504" 00:17:25.965 }, 00:17:25.965 { 00:17:25.965 "nsid": 2, 00:17:25.965 "bdev_name": "Malloc3", 00:17:25.965 "name": "Malloc3", 00:17:25.965 "nguid": "BE43A7D764DD432C972B84DE63D9BC2C", 00:17:25.965 "uuid": "be43a7d7-64dd-432c-972b-84de63d9bc2c" 00:17:25.965 } 00:17:25.965 ] 00:17:25.965 }, 00:17:25.965 { 00:17:25.965 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:25.965 "subtype": "NVMe", 00:17:25.965 "listen_addresses": [ 00:17:25.965 { 00:17:25.965 "trtype": "VFIOUSER", 00:17:25.965 "adrfam": "IPv4", 00:17:25.965 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:25.965 "trsvcid": "0" 00:17:25.965 } 00:17:25.965 ], 00:17:25.965 "allow_any_host": true, 00:17:25.965 "hosts": [], 00:17:25.965 "serial_number": 
"SPDK2", 00:17:25.965 "model_number": "SPDK bdev Controller", 00:17:25.965 "max_namespaces": 32, 00:17:25.965 "min_cntlid": 1, 00:17:25.965 "max_cntlid": 65519, 00:17:25.965 "namespaces": [ 00:17:25.965 { 00:17:25.965 "nsid": 1, 00:17:25.965 "bdev_name": "Malloc2", 00:17:25.965 "name": "Malloc2", 00:17:25.965 "nguid": "AAEA8FF68F8545CCB3591C4F549EE06C", 00:17:25.965 "uuid": "aaea8ff6-8f85-45cc-b359-1c4f549ee06c" 00:17:25.965 } 00:17:25.965 ] 00:17:25.965 } 00:17:25.965 ] 00:17:25.965 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2262022 00:17:25.965 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:25.965 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:25.965 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:25.965 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:25.965 [2024-07-26 01:53:07.872124] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:17:25.965 [2024-07-26 01:53:07.872167] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262150 ] 00:17:25.965 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.965 [2024-07-26 01:53:07.907188] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:25.965 [2024-07-26 01:53:07.913321] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:25.965 [2024-07-26 01:53:07.913356] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f97b35ba000 00:17:25.965 [2024-07-26 01:53:07.914318] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.965 [2024-07-26 01:53:07.915335] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.965 [2024-07-26 01:53:07.916343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.965 [2024-07-26 01:53:07.917363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.965 [2024-07-26 01:53:07.918369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.965 [2024-07-26 01:53:07.919362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.965 [2024-07-26 01:53:07.920380] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.965 [2024-07-26 01:53:07.921388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.965 [2024-07-26 01:53:07.922412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:25.965 [2024-07-26 01:53:07.922434] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f97b236e000 00:17:25.965 [2024-07-26 01:53:07.923549] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:25.965 [2024-07-26 01:53:07.937721] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:25.965 [2024-07-26 01:53:07.937756] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:25.965 [2024-07-26 01:53:07.942852] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:25.965 [2024-07-26 01:53:07.942904] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:25.965 [2024-07-26 01:53:07.942994] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:25.965 [2024-07-26 01:53:07.943014] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:25.965 [2024-07-26 01:53:07.943024] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:25.965 [2024-07-26 01:53:07.943859] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:25.965 [2024-07-26 01:53:07.943883] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:25.965 [2024-07-26 01:53:07.943897] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:25.965 [2024-07-26 01:53:07.944870] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:25.965 [2024-07-26 01:53:07.944889] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:25.965 [2024-07-26 01:53:07.944902] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:25.965 [2024-07-26 01:53:07.945877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:25.965 [2024-07-26 01:53:07.945897] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:25.965 [2024-07-26 01:53:07.946880] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:25.965 [2024-07-26 01:53:07.946900] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:25.965 [2024-07-26 01:53:07.946909] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:25.965 [2024-07-26 01:53:07.946927] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:25.965 [2024-07-26 01:53:07.947053] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:25.965 [2024-07-26 01:53:07.947070] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:25.965 [2024-07-26 01:53:07.947079] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:25.965 [2024-07-26 01:53:07.947888] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:25.965 [2024-07-26 01:53:07.948888] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:25.965 [2024-07-26 01:53:07.949899] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:25.965 [2024-07-26 01:53:07.950900] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:25.965 [2024-07-26 01:53:07.950980] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:25.965 [2024-07-26 01:53:07.951917] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:25.965 [2024-07-26 01:53:07.951937] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:25.965 [2024-07-26 01:53:07.951946] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:25.965 [2024-07-26 01:53:07.951969] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:25.965 [2024-07-26 01:53:07.951982] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:25.965 [2024-07-26 01:53:07.952004] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:25.965 [2024-07-26 01:53:07.952013] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.965 [2024-07-26 01:53:07.952020] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.965 [2024-07-26 01:53:07.952052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.965 [2024-07-26 01:53:07.960076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:25.965 [2024-07-26 01:53:07.960101] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:25.965 [2024-07-26 01:53:07.960110] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:25.965 [2024-07-26 01:53:07.960118] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:25.965 [2024-07-26 01:53:07.960126] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:25.965 [2024-07-26 01:53:07.960134] 
nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:25.965 [2024-07-26 01:53:07.960142] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:25.965 [2024-07-26 01:53:07.960150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:25.965 [2024-07-26 01:53:07.960167] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:25.965 [2024-07-26 01:53:07.960188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:25.965 [2024-07-26 01:53:07.968069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:25.965 [2024-07-26 01:53:07.968098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.966 [2024-07-26 01:53:07.968113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.966 [2024-07-26 01:53:07.968125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.966 [2024-07-26 01:53:07.968137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.966 [2024-07-26 01:53:07.968146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:25.966 [2024-07-26 01:53:07.968162] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:25.966 [2024-07-26 01:53:07.968177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:26.223 [2024-07-26 01:53:07.976071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:26.223 [2024-07-26 01:53:07.976091] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:26.223 [2024-07-26 01:53:07.976101] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:07.976117] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:07.976129] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:07.976143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:26.223 [2024-07-26 01:53:07.984073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:26.223 [2024-07-26 01:53:07.984148] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:07.984164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:07.984177] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:26.223 [2024-07-26 01:53:07.984186] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:26.223 [2024-07-26 01:53:07.984192] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.223 [2024-07-26 01:53:07.984202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:26.223 [2024-07-26 01:53:07.992070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:26.223 [2024-07-26 01:53:07.992103] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:26.223 [2024-07-26 01:53:07.992127] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:07.992143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:07.992155] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:26.223 [2024-07-26 01:53:07.992163] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.223 [2024-07-26 01:53:07.992169] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.223 [2024-07-26 01:53:07.992179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.223 [2024-07-26 01:53:08.000071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:17:26.223 [2024-07-26 01:53:08.000111] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:08.000127] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:08.000140] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:26.223 [2024-07-26 01:53:08.000148] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.223 [2024-07-26 01:53:08.000155] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.223 [2024-07-26 01:53:08.000165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.223 [2024-07-26 01:53:08.008068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:26.223 [2024-07-26 01:53:08.008100] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:08.008113] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:08.008127] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:08.008142] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:26.223 
[2024-07-26 01:53:08.008152] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:08.008161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:08.008170] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:26.223 [2024-07-26 01:53:08.008178] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:26.223 [2024-07-26 01:53:08.008186] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:26.223 [2024-07-26 01:53:08.008212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:26.223 [2024-07-26 01:53:08.016070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:26.223 [2024-07-26 01:53:08.016107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:26.223 [2024-07-26 01:53:08.024070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:26.223 [2024-07-26 01:53:08.024095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:26.223 [2024-07-26 01:53:08.032072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:26.223 [2024-07-26 01:53:08.032105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:26.223 [2024-07-26 01:53:08.040071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:26.223 [2024-07-26 01:53:08.040115] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:26.224 [2024-07-26 01:53:08.040127] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:26.224 [2024-07-26 01:53:08.040133] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:26.224 [2024-07-26 01:53:08.040140] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:26.224 [2024-07-26 01:53:08.040146] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:26.224 [2024-07-26 01:53:08.040156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:26.224 [2024-07-26 01:53:08.040168] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:26.224 [2024-07-26 01:53:08.040176] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:26.224 [2024-07-26 01:53:08.040182] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.224 [2024-07-26 01:53:08.040191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:26.224 [2024-07-26 01:53:08.040202] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:26.224 [2024-07-26 01:53:08.040210] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:17:26.224 [2024-07-26 01:53:08.040216] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.224 [2024-07-26 01:53:08.040225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.224 [2024-07-26 01:53:08.040237] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:26.224 [2024-07-26 01:53:08.040245] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:26.224 [2024-07-26 01:53:08.040251] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.224 [2024-07-26 01:53:08.040261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:26.224 [2024-07-26 01:53:08.048086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:26.224 [2024-07-26 01:53:08.048121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:26.224 [2024-07-26 01:53:08.048138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:26.224 [2024-07-26 01:53:08.048150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:26.224 ===================================================== 00:17:26.224 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:26.224 ===================================================== 00:17:26.224 Controller Capabilities/Features 00:17:26.224 ================================ 00:17:26.224 Vendor ID: 4e58 00:17:26.224 
Subsystem Vendor ID: 4e58 00:17:26.224 Serial Number: SPDK2 00:17:26.224 Model Number: SPDK bdev Controller 00:17:26.224 Firmware Version: 24.09 00:17:26.224 Recommended Arb Burst: 6 00:17:26.224 IEEE OUI Identifier: 8d 6b 50 00:17:26.224 Multi-path I/O 00:17:26.224 May have multiple subsystem ports: Yes 00:17:26.224 May have multiple controllers: Yes 00:17:26.224 Associated with SR-IOV VF: No 00:17:26.224 Max Data Transfer Size: 131072 00:17:26.224 Max Number of Namespaces: 32 00:17:26.224 Max Number of I/O Queues: 127 00:17:26.224 NVMe Specification Version (VS): 1.3 00:17:26.224 NVMe Specification Version (Identify): 1.3 00:17:26.224 Maximum Queue Entries: 256 00:17:26.224 Contiguous Queues Required: Yes 00:17:26.224 Arbitration Mechanisms Supported 00:17:26.224 Weighted Round Robin: Not Supported 00:17:26.224 Vendor Specific: Not Supported 00:17:26.224 Reset Timeout: 15000 ms 00:17:26.224 Doorbell Stride: 4 bytes 00:17:26.224 NVM Subsystem Reset: Not Supported 00:17:26.224 Command Sets Supported 00:17:26.224 NVM Command Set: Supported 00:17:26.224 Boot Partition: Not Supported 00:17:26.224 Memory Page Size Minimum: 4096 bytes 00:17:26.224 Memory Page Size Maximum: 4096 bytes 00:17:26.224 Persistent Memory Region: Not Supported 00:17:26.224 Optional Asynchronous Events Supported 00:17:26.224 Namespace Attribute Notices: Supported 00:17:26.224 Firmware Activation Notices: Not Supported 00:17:26.224 ANA Change Notices: Not Supported 00:17:26.224 PLE Aggregate Log Change Notices: Not Supported 00:17:26.224 LBA Status Info Alert Notices: Not Supported 00:17:26.224 EGE Aggregate Log Change Notices: Not Supported 00:17:26.224 Normal NVM Subsystem Shutdown event: Not Supported 00:17:26.224 Zone Descriptor Change Notices: Not Supported 00:17:26.224 Discovery Log Change Notices: Not Supported 00:17:26.224 Controller Attributes 00:17:26.224 128-bit Host Identifier: Supported 00:17:26.224 Non-Operational Permissive Mode: Not Supported 00:17:26.224 NVM Sets: Not Supported 
00:17:26.224 Read Recovery Levels: Not Supported 00:17:26.224 Endurance Groups: Not Supported 00:17:26.224 Predictable Latency Mode: Not Supported 00:17:26.224 Traffic Based Keep ALive: Not Supported 00:17:26.224 Namespace Granularity: Not Supported 00:17:26.224 SQ Associations: Not Supported 00:17:26.224 UUID List: Not Supported 00:17:26.224 Multi-Domain Subsystem: Not Supported 00:17:26.224 Fixed Capacity Management: Not Supported 00:17:26.224 Variable Capacity Management: Not Supported 00:17:26.224 Delete Endurance Group: Not Supported 00:17:26.224 Delete NVM Set: Not Supported 00:17:26.224 Extended LBA Formats Supported: Not Supported 00:17:26.224 Flexible Data Placement Supported: Not Supported 00:17:26.224 00:17:26.224 Controller Memory Buffer Support 00:17:26.224 ================================ 00:17:26.224 Supported: No 00:17:26.224 00:17:26.224 Persistent Memory Region Support 00:17:26.224 ================================ 00:17:26.224 Supported: No 00:17:26.224 00:17:26.224 Admin Command Set Attributes 00:17:26.224 ============================ 00:17:26.224 Security Send/Receive: Not Supported 00:17:26.224 Format NVM: Not Supported 00:17:26.224 Firmware Activate/Download: Not Supported 00:17:26.224 Namespace Management: Not Supported 00:17:26.224 Device Self-Test: Not Supported 00:17:26.224 Directives: Not Supported 00:17:26.224 NVMe-MI: Not Supported 00:17:26.224 Virtualization Management: Not Supported 00:17:26.224 Doorbell Buffer Config: Not Supported 00:17:26.224 Get LBA Status Capability: Not Supported 00:17:26.224 Command & Feature Lockdown Capability: Not Supported 00:17:26.224 Abort Command Limit: 4 00:17:26.224 Async Event Request Limit: 4 00:17:26.224 Number of Firmware Slots: N/A 00:17:26.224 Firmware Slot 1 Read-Only: N/A 00:17:26.224 Firmware Activation Without Reset: N/A 00:17:26.224 Multiple Update Detection Support: N/A 00:17:26.224 Firmware Update Granularity: No Information Provided 00:17:26.224 Per-Namespace SMART Log: No 00:17:26.224 
Asymmetric Namespace Access Log Page: Not Supported 00:17:26.224 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:26.224 Command Effects Log Page: Supported 00:17:26.224 Get Log Page Extended Data: Supported 00:17:26.224 Telemetry Log Pages: Not Supported 00:17:26.224 Persistent Event Log Pages: Not Supported 00:17:26.224 Supported Log Pages Log Page: May Support 00:17:26.224 Commands Supported & Effects Log Page: Not Supported 00:17:26.224 Feature Identifiers & Effects Log Page:May Support 00:17:26.224 NVMe-MI Commands & Effects Log Page: May Support 00:17:26.224 Data Area 4 for Telemetry Log: Not Supported 00:17:26.224 Error Log Page Entries Supported: 128 00:17:26.224 Keep Alive: Supported 00:17:26.224 Keep Alive Granularity: 10000 ms 00:17:26.224 00:17:26.224 NVM Command Set Attributes 00:17:26.224 ========================== 00:17:26.224 Submission Queue Entry Size 00:17:26.224 Max: 64 00:17:26.224 Min: 64 00:17:26.224 Completion Queue Entry Size 00:17:26.224 Max: 16 00:17:26.224 Min: 16 00:17:26.224 Number of Namespaces: 32 00:17:26.224 Compare Command: Supported 00:17:26.224 Write Uncorrectable Command: Not Supported 00:17:26.224 Dataset Management Command: Supported 00:17:26.224 Write Zeroes Command: Supported 00:17:26.224 Set Features Save Field: Not Supported 00:17:26.224 Reservations: Not Supported 00:17:26.224 Timestamp: Not Supported 00:17:26.224 Copy: Supported 00:17:26.224 Volatile Write Cache: Present 00:17:26.224 Atomic Write Unit (Normal): 1 00:17:26.224 Atomic Write Unit (PFail): 1 00:17:26.224 Atomic Compare & Write Unit: 1 00:17:26.224 Fused Compare & Write: Supported 00:17:26.224 Scatter-Gather List 00:17:26.224 SGL Command Set: Supported (Dword aligned) 00:17:26.224 SGL Keyed: Not Supported 00:17:26.224 SGL Bit Bucket Descriptor: Not Supported 00:17:26.224 SGL Metadata Pointer: Not Supported 00:17:26.224 Oversized SGL: Not Supported 00:17:26.224 SGL Metadata Address: Not Supported 00:17:26.224 SGL Offset: Not Supported 00:17:26.224 Transport 
SGL Data Block: Not Supported 00:17:26.224 Replay Protected Memory Block: Not Supported 00:17:26.224 00:17:26.224 Firmware Slot Information 00:17:26.224 ========================= 00:17:26.224 Active slot: 1 00:17:26.224 Slot 1 Firmware Revision: 24.09 00:17:26.224 00:17:26.224 00:17:26.224 Commands Supported and Effects 00:17:26.224 ============================== 00:17:26.224 Admin Commands 00:17:26.224 -------------- 00:17:26.224 Get Log Page (02h): Supported 00:17:26.224 Identify (06h): Supported 00:17:26.224 Abort (08h): Supported 00:17:26.224 Set Features (09h): Supported 00:17:26.224 Get Features (0Ah): Supported 00:17:26.224 Asynchronous Event Request (0Ch): Supported 00:17:26.224 Keep Alive (18h): Supported 00:17:26.224 I/O Commands 00:17:26.224 ------------ 00:17:26.224 Flush (00h): Supported LBA-Change 00:17:26.224 Write (01h): Supported LBA-Change 00:17:26.224 Read (02h): Supported 00:17:26.224 Compare (05h): Supported 00:17:26.224 Write Zeroes (08h): Supported LBA-Change 00:17:26.224 Dataset Management (09h): Supported LBA-Change 00:17:26.224 Copy (19h): Supported LBA-Change 00:17:26.224 00:17:26.224 Error Log 00:17:26.224 ========= 00:17:26.224 00:17:26.224 Arbitration 00:17:26.224 =========== 00:17:26.224 Arbitration Burst: 1 00:17:26.224 00:17:26.224 Power Management 00:17:26.224 ================ 00:17:26.224 Number of Power States: 1 00:17:26.224 Current Power State: Power State #0 00:17:26.224 Power State #0: 00:17:26.224 Max Power: 0.00 W 00:17:26.224 Non-Operational State: Operational 00:17:26.224 Entry Latency: Not Reported 00:17:26.224 Exit Latency: Not Reported 00:17:26.224 Relative Read Throughput: 0 00:17:26.224 Relative Read Latency: 0 00:17:26.224 Relative Write Throughput: 0 00:17:26.224 Relative Write Latency: 0 00:17:26.224 Idle Power: Not Reported 00:17:26.224 Active Power: Not Reported 00:17:26.224 Non-Operational Permissive Mode: Not Supported 00:17:26.224 00:17:26.224 Health Information 00:17:26.224 ================== 00:17:26.224 
Critical Warnings: 00:17:26.224 Available Spare Space: OK 00:17:26.224 Temperature: OK 00:17:26.224 Device Reliability: OK 00:17:26.224 Read Only: No 00:17:26.224 Volatile Memory Backup: OK 00:17:26.224 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:26.224 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:26.224 Available Spare: 0% 00:17:26.224 Available Sp[2024-07-26 01:53:08.048266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:26.224 [2024-07-26 01:53:08.056072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:26.224 [2024-07-26 01:53:08.056132] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:26.224 [2024-07-26 01:53:08.056150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.224 [2024-07-26 01:53:08.056161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.224 [2024-07-26 01:53:08.056171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.224 [2024-07-26 01:53:08.056180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.224 [2024-07-26 01:53:08.056244] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:26.224 [2024-07-26 01:53:08.056264] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:26.224 [2024-07-26 01:53:08.057251] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:26.224 [2024-07-26 01:53:08.057338] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:26.224 [2024-07-26 01:53:08.057354] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:26.224 [2024-07-26 01:53:08.058258] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:26.224 [2024-07-26 01:53:08.058282] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:26.224 [2024-07-26 01:53:08.058333] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:26.224 [2024-07-26 01:53:08.059544] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:26.224 are Threshold: 0% 00:17:26.224 Life Percentage Used: 0% 00:17:26.224 Data Units Read: 0 00:17:26.224 Data Units Written: 0 00:17:26.224 Host Read Commands: 0 00:17:26.224 Host Write Commands: 0 00:17:26.224 Controller Busy Time: 0 minutes 00:17:26.224 Power Cycles: 0 00:17:26.224 Power On Hours: 0 hours 00:17:26.224 Unsafe Shutdowns: 0 00:17:26.224 Unrecoverable Media Errors: 0 00:17:26.224 Lifetime Error Log Entries: 0 00:17:26.224 Warning Temperature Time: 0 minutes 00:17:26.224 Critical Temperature Time: 0 minutes 00:17:26.224 00:17:26.224 Number of Queues 00:17:26.224 ================ 00:17:26.224 Number of I/O Submission Queues: 127 00:17:26.224 Number of I/O Completion Queues: 127 00:17:26.224 00:17:26.224 Active Namespaces 00:17:26.224 ================= 00:17:26.224 Namespace ID:1 00:17:26.224 Error Recovery Timeout: Unlimited 00:17:26.224 Command Set Identifier: NVM (00h) 00:17:26.224 Deallocate: 
Supported 00:17:26.224 Deallocated/Unwritten Error: Not Supported 00:17:26.224 Deallocated Read Value: Unknown 00:17:26.224 Deallocate in Write Zeroes: Not Supported 00:17:26.224 Deallocated Guard Field: 0xFFFF 00:17:26.224 Flush: Supported 00:17:26.224 Reservation: Supported 00:17:26.224 Namespace Sharing Capabilities: Multiple Controllers 00:17:26.224 Size (in LBAs): 131072 (0GiB) 00:17:26.224 Capacity (in LBAs): 131072 (0GiB) 00:17:26.225 Utilization (in LBAs): 131072 (0GiB) 00:17:26.225 NGUID: AAEA8FF68F8545CCB3591C4F549EE06C 00:17:26.225 UUID: aaea8ff6-8f85-45cc-b359-1c4f549ee06c 00:17:26.225 Thin Provisioning: Not Supported 00:17:26.225 Per-NS Atomic Units: Yes 00:17:26.225 Atomic Boundary Size (Normal): 0 00:17:26.225 Atomic Boundary Size (PFail): 0 00:17:26.225 Atomic Boundary Offset: 0 00:17:26.225 Maximum Single Source Range Length: 65535 00:17:26.225 Maximum Copy Length: 65535 00:17:26.225 Maximum Source Range Count: 1 00:17:26.225 NGUID/EUI64 Never Reused: No 00:17:26.225 Namespace Write Protected: No 00:17:26.225 Number of LBA Formats: 1 00:17:26.225 Current LBA Format: LBA Format #00 00:17:26.225 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:26.225 00:17:26.225 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:26.225 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.482 [2024-07-26 01:53:08.287805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:31.775 Initializing NVMe Controllers 00:17:31.775 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:31.775 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:31.775 
Initialization complete. Launching workers. 00:17:31.775 ======================================================== 00:17:31.775 Latency(us) 00:17:31.775 Device Information : IOPS MiB/s Average min max 00:17:31.775 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35179.61 137.42 3637.94 1173.47 10548.16 00:17:31.775 ======================================================== 00:17:31.775 Total : 35179.61 137.42 3637.94 1173.47 10548.16 00:17:31.775 00:17:31.775 [2024-07-26 01:53:13.395433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:31.775 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:31.775 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.775 [2024-07-26 01:53:13.627142] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:37.039 Initializing NVMe Controllers 00:17:37.039 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:37.039 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:37.039 Initialization complete. Launching workers. 
00:17:37.039 ======================================================== 00:17:37.039 Latency(us) 00:17:37.039 Device Information : IOPS MiB/s Average min max 00:17:37.039 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32632.20 127.47 3923.88 1192.30 10286.12 00:17:37.039 ======================================================== 00:17:37.039 Total : 32632.20 127.47 3923.88 1192.30 10286.12 00:17:37.039 00:17:37.039 [2024-07-26 01:53:18.649611] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:37.039 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:37.039 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.039 [2024-07-26 01:53:18.867406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:42.328 [2024-07-26 01:53:24.008214] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:42.328 Initializing NVMe Controllers 00:17:42.328 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:42.328 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:42.328 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:42.328 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:42.328 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:42.328 Initialization complete. Launching workers. 
00:17:42.328 Starting thread on core 2 00:17:42.328 Starting thread on core 3 00:17:42.328 Starting thread on core 1 00:17:42.328 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:42.328 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.328 [2024-07-26 01:53:24.312581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:45.608 [2024-07-26 01:53:27.370551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:45.608 Initializing NVMe Controllers 00:17:45.608 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:45.608 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:45.608 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:45.608 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:45.608 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:45.608 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:45.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:45.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:45.608 Initialization complete. Launching workers. 
00:17:45.608 Starting thread on core 1 with urgent priority queue 00:17:45.608 Starting thread on core 2 with urgent priority queue 00:17:45.608 Starting thread on core 3 with urgent priority queue 00:17:45.608 Starting thread on core 0 with urgent priority queue 00:17:45.608 SPDK bdev Controller (SPDK2 ) core 0: 4996.00 IO/s 20.02 secs/100000 ios 00:17:45.608 SPDK bdev Controller (SPDK2 ) core 1: 5209.33 IO/s 19.20 secs/100000 ios 00:17:45.608 SPDK bdev Controller (SPDK2 ) core 2: 5570.00 IO/s 17.95 secs/100000 ios 00:17:45.608 SPDK bdev Controller (SPDK2 ) core 3: 5717.33 IO/s 17.49 secs/100000 ios 00:17:45.608 ======================================================== 00:17:45.608 00:17:45.608 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:45.608 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.864 [2024-07-26 01:53:27.668597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:45.864 Initializing NVMe Controllers 00:17:45.864 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:45.864 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:45.864 Namespace ID: 1 size: 0GB 00:17:45.864 Initialization complete. 00:17:45.864 INFO: using host memory buffer for IO 00:17:45.864 Hello world! 
00:17:45.864 [2024-07-26 01:53:27.678684] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:45.864 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:45.864 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.121 [2024-07-26 01:53:27.955779] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:47.053 Initializing NVMe Controllers 00:17:47.053 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.053 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.053 Initialization complete. Launching workers. 00:17:47.053 submit (in ns) avg, min, max = 7821.1, 3511.1, 4015831.1 00:17:47.053 complete (in ns) avg, min, max = 25409.1, 2066.7, 4996591.1 00:17:47.053 00:17:47.053 Submit histogram 00:17:47.054 ================ 00:17:47.054 Range in us Cumulative Count 00:17:47.054 3.508 - 3.532: 0.5918% ( 79) 00:17:47.054 3.532 - 3.556: 1.2884% ( 93) 00:17:47.054 3.556 - 3.579: 4.6292% ( 446) 00:17:47.054 3.579 - 3.603: 9.7978% ( 690) 00:17:47.054 3.603 - 3.627: 17.6629% ( 1050) 00:17:47.054 3.627 - 3.650: 26.3221% ( 1156) 00:17:47.054 3.650 - 3.674: 34.8764% ( 1142) 00:17:47.054 3.674 - 3.698: 42.8390% ( 1063) 00:17:47.054 3.698 - 3.721: 51.5805% ( 1167) 00:17:47.054 3.721 - 3.745: 57.5206% ( 793) 00:17:47.054 3.745 - 3.769: 61.9551% ( 592) 00:17:47.054 3.769 - 3.793: 65.8277% ( 517) 00:17:47.054 3.793 - 3.816: 69.5206% ( 493) 00:17:47.054 3.816 - 3.840: 73.0712% ( 474) 00:17:47.054 3.840 - 3.864: 76.7191% ( 487) 00:17:47.054 3.864 - 3.887: 80.1423% ( 457) 00:17:47.054 3.887 - 3.911: 83.5206% ( 451) 00:17:47.054 3.911 - 3.935: 86.5543% ( 405) 00:17:47.054 3.935 - 3.959: 88.8015% ( 300) 00:17:47.054 3.959 - 
3.982: 90.5243% ( 230) 00:17:47.054 3.982 - 4.006: 92.0375% ( 202) 00:17:47.054 4.006 - 4.030: 93.1685% ( 151) 00:17:47.054 4.030 - 4.053: 94.2097% ( 139) 00:17:47.054 4.053 - 4.077: 95.0337% ( 110) 00:17:47.054 4.077 - 4.101: 95.6479% ( 82) 00:17:47.054 4.101 - 4.124: 96.0824% ( 58) 00:17:47.054 4.124 - 4.148: 96.5019% ( 56) 00:17:47.054 4.148 - 4.172: 96.7491% ( 33) 00:17:47.054 4.172 - 4.196: 96.9064% ( 21) 00:17:47.054 4.196 - 4.219: 97.0787% ( 23) 00:17:47.054 4.219 - 4.243: 97.1835% ( 14) 00:17:47.054 4.243 - 4.267: 97.2884% ( 14) 00:17:47.054 4.267 - 4.290: 97.4082% ( 16) 00:17:47.054 4.290 - 4.314: 97.4831% ( 10) 00:17:47.054 4.314 - 4.338: 97.5356% ( 7) 00:17:47.054 4.338 - 4.361: 97.6030% ( 9) 00:17:47.054 4.361 - 4.385: 97.6554% ( 7) 00:17:47.054 4.385 - 4.409: 97.6854% ( 4) 00:17:47.054 4.409 - 4.433: 97.6929% ( 1) 00:17:47.054 4.433 - 4.456: 97.7154% ( 3) 00:17:47.054 4.456 - 4.480: 97.7228% ( 1) 00:17:47.054 4.480 - 4.504: 97.7303% ( 1) 00:17:47.054 4.527 - 4.551: 97.7453% ( 2) 00:17:47.054 4.717 - 4.741: 97.7753% ( 4) 00:17:47.054 4.764 - 4.788: 97.8127% ( 5) 00:17:47.054 4.788 - 4.812: 97.8202% ( 1) 00:17:47.054 4.812 - 4.836: 97.8502% ( 4) 00:17:47.054 4.836 - 4.859: 97.9101% ( 8) 00:17:47.054 4.859 - 4.883: 97.9625% ( 7) 00:17:47.054 4.883 - 4.907: 97.9850% ( 3) 00:17:47.054 4.907 - 4.930: 98.0150% ( 4) 00:17:47.054 4.930 - 4.954: 98.0449% ( 4) 00:17:47.054 4.954 - 4.978: 98.0974% ( 7) 00:17:47.054 4.978 - 5.001: 98.1199% ( 3) 00:17:47.054 5.001 - 5.025: 98.1723% ( 7) 00:17:47.054 5.025 - 5.049: 98.2247% ( 7) 00:17:47.054 5.049 - 5.073: 98.2397% ( 2) 00:17:47.054 5.073 - 5.096: 98.2547% ( 2) 00:17:47.054 5.096 - 5.120: 98.2697% ( 2) 00:17:47.054 5.120 - 5.144: 98.2846% ( 2) 00:17:47.054 5.167 - 5.191: 98.2921% ( 1) 00:17:47.054 5.191 - 5.215: 98.3146% ( 3) 00:17:47.054 5.215 - 5.239: 98.3221% ( 1) 00:17:47.054 5.239 - 5.262: 98.3670% ( 6) 00:17:47.054 5.262 - 5.286: 98.3745% ( 1) 00:17:47.054 5.333 - 5.357: 98.3895% ( 2) 00:17:47.054 5.357 - 
5.381: 98.3970% ( 1) 00:17:47.054 5.404 - 5.428: 98.4120% ( 2) 00:17:47.054 5.428 - 5.452: 98.4195% ( 1) 00:17:47.054 5.452 - 5.476: 98.4345% ( 2) 00:17:47.054 5.641 - 5.665: 98.4419% ( 1) 00:17:47.054 6.163 - 6.210: 98.4569% ( 2) 00:17:47.054 6.210 - 6.258: 98.4644% ( 1) 00:17:47.054 6.400 - 6.447: 98.4719% ( 1) 00:17:47.054 6.447 - 6.495: 98.4869% ( 2) 00:17:47.054 6.495 - 6.542: 98.4944% ( 1) 00:17:47.054 6.542 - 6.590: 98.5019% ( 1) 00:17:47.054 6.637 - 6.684: 98.5094% ( 1) 00:17:47.054 6.779 - 6.827: 98.5169% ( 1) 00:17:47.054 6.827 - 6.874: 98.5393% ( 3) 00:17:47.054 6.874 - 6.921: 98.5543% ( 2) 00:17:47.054 7.111 - 7.159: 98.5693% ( 2) 00:17:47.054 7.159 - 7.206: 98.5768% ( 1) 00:17:47.054 7.206 - 7.253: 98.5918% ( 2) 00:17:47.054 7.301 - 7.348: 98.6142% ( 3) 00:17:47.054 7.348 - 7.396: 98.6217% ( 1) 00:17:47.054 7.490 - 7.538: 98.6292% ( 1) 00:17:47.054 7.538 - 7.585: 98.6442% ( 2) 00:17:47.054 7.585 - 7.633: 98.6592% ( 2) 00:17:47.054 7.680 - 7.727: 98.6816% ( 3) 00:17:47.054 7.727 - 7.775: 98.6966% ( 2) 00:17:47.054 7.870 - 7.917: 98.7041% ( 1) 00:17:47.054 7.917 - 7.964: 98.7191% ( 2) 00:17:47.054 8.012 - 8.059: 98.7266% ( 1) 00:17:47.054 8.201 - 8.249: 98.7341% ( 1) 00:17:47.054 8.249 - 8.296: 98.7416% ( 1) 00:17:47.054 8.296 - 8.344: 98.7566% ( 2) 00:17:47.054 8.344 - 8.391: 98.7640% ( 1) 00:17:47.054 8.486 - 8.533: 98.7715% ( 1) 00:17:47.054 8.533 - 8.581: 98.7865% ( 2) 00:17:47.054 8.676 - 8.723: 98.7940% ( 1) 00:17:47.054 8.723 - 8.770: 98.8015% ( 1) 00:17:47.054 8.770 - 8.818: 98.8090% ( 1) 00:17:47.054 8.960 - 9.007: 98.8165% ( 1) 00:17:47.054 9.150 - 9.197: 98.8240% ( 1) 00:17:47.054 9.387 - 9.434: 98.8315% ( 1) 00:17:47.054 10.050 - 10.098: 98.8390% ( 1) 00:17:47.054 10.098 - 10.145: 98.8464% ( 1) 00:17:47.054 10.240 - 10.287: 98.8539% ( 1) 00:17:47.054 10.382 - 10.430: 98.8614% ( 1) 00:17:47.054 10.430 - 10.477: 98.8689% ( 1) 00:17:47.054 10.999 - 11.046: 98.8839% ( 2) 00:17:47.054 11.236 - 11.283: 98.8989% ( 2) 00:17:47.054 11.283 - 11.330: 
98.9064% ( 1) 00:17:47.054 11.330 - 11.378: 98.9139% ( 1) 00:17:47.054 11.899 - 11.947: 98.9213% ( 1) 00:17:47.054 11.994 - 12.041: 98.9288% ( 1) 00:17:47.054 12.136 - 12.231: 98.9363% ( 1) 00:17:47.054 12.231 - 12.326: 98.9438% ( 1) 00:17:47.054 12.990 - 13.084: 98.9513% ( 1) 00:17:47.054 13.369 - 13.464: 98.9588% ( 1) 00:17:47.054 13.464 - 13.559: 98.9663% ( 1) 00:17:47.054 13.653 - 13.748: 98.9738% ( 1) 00:17:47.054 13.748 - 13.843: 98.9813% ( 1) 00:17:47.054 13.938 - 14.033: 98.9888% ( 1) 00:17:47.054 14.412 - 14.507: 98.9963% ( 1) 00:17:47.054 14.507 - 14.601: 99.0037% ( 1) 00:17:47.054 14.886 - 14.981: 99.0112% ( 1) 00:17:47.054 17.161 - 17.256: 99.0187% ( 1) 00:17:47.054 17.256 - 17.351: 99.0487% ( 4) 00:17:47.054 17.351 - 17.446: 99.0861% ( 5) 00:17:47.054 17.446 - 17.541: 99.1311% ( 6) 00:17:47.054 17.541 - 17.636: 99.1536% ( 3) 00:17:47.054 17.636 - 17.730: 99.1685% ( 2) 00:17:47.054 17.730 - 17.825: 99.1835% ( 2) 00:17:47.054 17.825 - 17.920: 99.2210% ( 5) 00:17:47.054 17.920 - 18.015: 99.2360% ( 2) 00:17:47.054 18.015 - 18.110: 99.2584% ( 3) 00:17:47.054 18.110 - 18.204: 99.3558% ( 13) 00:17:47.054 18.204 - 18.299: 99.4307% ( 10) 00:17:47.054 18.299 - 18.394: 99.4906% ( 8) 00:17:47.054 18.394 - 18.489: 99.5581% ( 9) 00:17:47.054 18.489 - 18.584: 99.5880% ( 4) 00:17:47.054 18.584 - 18.679: 99.6404% ( 7) 00:17:47.054 18.679 - 18.773: 99.6704% ( 4) 00:17:47.054 18.868 - 18.963: 99.6854% ( 2) 00:17:47.054 18.963 - 19.058: 99.7004% ( 2) 00:17:47.054 19.058 - 19.153: 99.7079% ( 1) 00:17:47.054 19.247 - 19.342: 99.7228% ( 2) 00:17:47.054 19.342 - 19.437: 99.7453% ( 3) 00:17:47.054 19.437 - 19.532: 99.7753% ( 4) 00:17:47.054 19.532 - 19.627: 99.7828% ( 1) 00:17:47.054 19.627 - 19.721: 99.8127% ( 4) 00:17:47.054 19.721 - 19.816: 99.8352% ( 3) 00:17:47.054 19.816 - 19.911: 99.8427% ( 1) 00:17:47.054 19.911 - 20.006: 99.8502% ( 1) 00:17:47.054 20.196 - 20.290: 99.8577% ( 1) 00:17:47.054 20.290 - 20.385: 99.8652% ( 1) 00:17:47.054 21.902 - 21.997: 99.8727% ( 1) 
00:17:47.054 22.661 - 22.756: 99.8801% ( 1) 00:17:47.054 22.850 - 22.945: 99.8876% ( 1) 00:17:47.054 25.600 - 25.790: 99.8951% ( 1) 00:17:47.054 29.772 - 29.961: 99.9026% ( 1) 00:17:47.054 3980.705 - 4004.978: 99.9625% ( 8) 00:17:47.054 4004.978 - 4029.250: 100.0000% ( 5) 00:17:47.054 00:17:47.054 Complete histogram 00:17:47.054 ================== 00:17:47.054 Range in us Cumulative Count 00:17:47.054 2.062 - 2.074: 1.4232% ( 190) 00:17:47.054 2.074 - 2.086: 36.7865% ( 4721) 00:17:47.054 2.086 - 2.098: 48.7790% ( 1601) 00:17:47.054 2.098 - 2.110: 51.2210% ( 326) 00:17:47.054 2.110 - 2.121: 60.6891% ( 1264) 00:17:47.054 2.121 - 2.133: 62.9963% ( 308) 00:17:47.055 2.133 - 2.145: 67.4607% ( 596) 00:17:47.055 2.145 - 2.157: 79.9026% ( 1661) 00:17:47.055 2.157 - 2.169: 81.8727% ( 263) 00:17:47.055 2.169 - 2.181: 84.0899% ( 296) 00:17:47.055 2.181 - 2.193: 87.9026% ( 509) 00:17:47.055 2.193 - 2.204: 88.9888% ( 145) 00:17:47.055 2.204 - 2.216: 89.8427% ( 114) 00:17:47.055 2.216 - 2.228: 92.3745% ( 338) 00:17:47.055 2.228 - 2.240: 94.5169% ( 286) 00:17:47.055 2.240 - 2.252: 95.0187% ( 67) 00:17:47.055 2.252 - 2.264: 95.4906% ( 63) 00:17:47.055 2.264 - 2.276: 95.6180% ( 17) 00:17:47.055 2.276 - 2.287: 95.6854% ( 9) 00:17:47.055 2.287 - 2.299: 95.8652% ( 24) 00:17:47.055 2.299 - 2.311: 96.1423% ( 37) 00:17:47.055 2.311 - 2.323: 96.2697% ( 17) 00:17:47.055 2.323 - 2.335: 96.3596% ( 12) 00:17:47.055 2.335 - 2.347: 96.5393% ( 24) 00:17:47.055 2.347 - 2.359: 96.8315% ( 39) 00:17:47.055 2.359 - 2.370: 97.2734% ( 59) 00:17:47.055 2.370 - 2.382: 97.6554% ( 51) 00:17:47.055 2.382 - 2.394: 97.9251% ( 36) 00:17:47.055 2.394 - 2.406: 98.1723% ( 33) 00:17:47.055 2.406 - 2.418: 98.2996% ( 17) 00:17:47.055 2.418 - 2.430: 98.3820% ( 11) 00:17:47.055 2.430 - 2.441: 98.4794% ( 13) 00:17:47.055 2.441 - 2.453: 98.5094% ( 4) 00:17:47.055 2.453 - 2.465: 98.5318% ( 3) 00:17:47.055 2.465 - 2.477: 98.5468% ( 2) 00:17:47.055 2.477 - 2.489: 98.5768% ( 4) 00:17:47.055 2.489 - 2.501: 98.5993% ( 3) 
00:17:47.055 2.536 - 2.548: 98.6142% ( 2) 00:17:47.055 2.548 - 2.560: 98.6217% ( 1) 00:17:47.055 2.560 - 2.572: 98.6292% ( 1) 00:17:47.055 2.572 - 2.584: 98.6367% ( 1) 00:17:47.055 2.596 - 2.607: 98.6442% ( 1) 00:17:47.055 2.607 - 2.619: 98.6517% ( 1) 00:17:47.055 2.631 - 2.643: 98.6592% ( 1) 00:17:47.055 2.679 - 2.690: 98.6667% ( 1) 00:17:47.055 2.690 - 2.702: 98.6742% ( 1) 00:17:47.055 2.761 - 2.773: 98.6816% ( 1) 00:17:47.055 3.271 - 3.295: 98.6891% ( 1) 00:17:47.055 3.295 - 3.319: 98.7116% ( 3) 00:17:47.055 3.342 - 3.366: 98.7191% ( 1) 00:17:47.055 3.366 - 3.390: 98.7266% ( 1) 00:17:47.055 3.390 - 3.413: 98.7341% ( 1) 00:17:47.055 3.413 - 3.437: 98.7715% ( 5) 00:17:47.055 3.461 - 3.484: 98.7790% ( 1) 00:17:47.055 3.484 - 3.508: 98.7865% ( 1) 00:17:47.055 3.508 - 3.532: 98.7940% ( 1) 00:17:47.055 3.556 - 3.579: 98.8015% ( 1) 00:17:47.055 3.579 - 3.603: 98.8090% ( 1) 00:17:47.055 3.603 - 3.627: 98.8165% ( 1) 00:17:47.055 3.674 - 3.698: 98.8240% ( 1) 00:17:47.055 3.769 - 3.793: 98.8315% ( 1) 00:17:47.055 3.793 - 3.816: 98.8390% ( 1) 00:17:47.055 4.006 - 4.030: 98.8464% ( 1) 00:17:47.055 4.314 - 4.338: 98.8539% ( 1) 00:17:47.055 4.599 - 4.622: 98.8614% ( 1) 00:17:47.055 5.144 - 5.167: 98.8689% ( 1) 00:17:47.055 5.452 - 5.476: 98.8764% ( 1) 00:17:47.055 5.499 - 5.523: 98.8839% ( 1) 00:17:47.055 5.689 - 5.713: 98.8914% ( 1) 00:17:47.055 5.902 - 5.926: 98.9064% ( 2) 00:17:47.055 6.021 - 6.044: 98.9139% ( 1) 00:17:47.055 6.068 - 6.116: 98.9213% ( 1) 00:17:47.055 6.116 - 6.163: 98.9288% ( 1) 00:17:47.055 6.258 - 6.305: 98.9363% ( 1) 00:17:47.055 6.353 - 6.400: 98.9438% ( 1) 00:17:47.055 6.400 - 6.447: 98.9513% ( 1) 00:17:47.055 6.542 - 6.590: 98.9588% ( 1) 00:17:47.055 6.590 - 6.637: 98.9663% ( 1) 00:17:47.055 6.637 - 6.684: 98.9738% ( 1) 00:17:47.055 6.921 - 6.969: 98.9888% ( 2) 00:17:47.055 7.253 - 7.301: 98.9963% ( 1) 00:17:47.055 7.775 - 7.822: 99.0037% ( 1) 00:17:47.055 15.360 - 15.455: 99.0112% ( 1) 00:17:47.055 15.644 - 15.739: 99.0187% ( 1) 00:17:47.055 15.739 - 
15.834: 99.0262% ( 1) 00:17:47.055 15.834 - 15.929: 99.0487% ( 3) 00:17:47.055 15.929 - 16.024: 9[2024-07-26 01:53:29.049962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:47.312 9.0562% ( 1) 00:17:47.312 16.024 - 16.119: 99.0787% ( 3) 00:17:47.312 16.119 - 16.213: 99.1011% ( 3) 00:17:47.312 16.213 - 16.308: 99.1386% ( 5) 00:17:47.312 16.403 - 16.498: 99.1610% ( 3) 00:17:47.312 16.498 - 16.593: 99.2135% ( 7) 00:17:47.312 16.593 - 16.687: 99.2360% ( 3) 00:17:47.312 16.687 - 16.782: 99.2584% ( 3) 00:17:47.312 16.782 - 16.877: 99.2809% ( 3) 00:17:47.312 16.877 - 16.972: 99.3034% ( 3) 00:17:47.312 16.972 - 17.067: 99.3258% ( 3) 00:17:47.312 17.067 - 17.161: 99.3408% ( 2) 00:17:47.312 17.161 - 17.256: 99.3483% ( 1) 00:17:47.312 17.351 - 17.446: 99.3633% ( 2) 00:17:47.312 17.446 - 17.541: 99.3708% ( 1) 00:17:47.312 17.541 - 17.636: 99.3858% ( 2) 00:17:47.312 17.730 - 17.825: 99.3933% ( 1) 00:17:47.312 17.920 - 18.015: 99.4007% ( 1) 00:17:47.312 18.489 - 18.584: 99.4082% ( 1) 00:17:47.312 25.979 - 26.169: 99.4157% ( 1) 00:17:47.312 1784.036 - 1796.172: 99.4232% ( 1) 00:17:47.312 3021.938 - 3034.074: 99.4307% ( 1) 00:17:47.312 3980.705 - 4004.978: 99.8652% ( 58) 00:17:47.312 4004.978 - 4029.250: 99.9850% ( 16) 00:17:47.312 4102.068 - 4126.341: 99.9925% ( 1) 00:17:47.312 4975.881 - 5000.154: 100.0000% ( 1) 00:17:47.312 00:17:47.312 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:47.312 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:47.312 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:47.312 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 
00:17:47.312 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:47.570 [ 00:17:47.570 { 00:17:47.570 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:47.570 "subtype": "Discovery", 00:17:47.570 "listen_addresses": [], 00:17:47.570 "allow_any_host": true, 00:17:47.570 "hosts": [] 00:17:47.570 }, 00:17:47.570 { 00:17:47.570 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:47.570 "subtype": "NVMe", 00:17:47.570 "listen_addresses": [ 00:17:47.570 { 00:17:47.570 "trtype": "VFIOUSER", 00:17:47.570 "adrfam": "IPv4", 00:17:47.570 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:47.570 "trsvcid": "0" 00:17:47.570 } 00:17:47.570 ], 00:17:47.570 "allow_any_host": true, 00:17:47.570 "hosts": [], 00:17:47.570 "serial_number": "SPDK1", 00:17:47.570 "model_number": "SPDK bdev Controller", 00:17:47.570 "max_namespaces": 32, 00:17:47.570 "min_cntlid": 1, 00:17:47.570 "max_cntlid": 65519, 00:17:47.570 "namespaces": [ 00:17:47.570 { 00:17:47.570 "nsid": 1, 00:17:47.570 "bdev_name": "Malloc1", 00:17:47.570 "name": "Malloc1", 00:17:47.570 "nguid": "192DC55AFED34CB5BFE2A731A920B504", 00:17:47.570 "uuid": "192dc55a-fed3-4cb5-bfe2-a731a920b504" 00:17:47.570 }, 00:17:47.570 { 00:17:47.570 "nsid": 2, 00:17:47.570 "bdev_name": "Malloc3", 00:17:47.570 "name": "Malloc3", 00:17:47.570 "nguid": "BE43A7D764DD432C972B84DE63D9BC2C", 00:17:47.570 "uuid": "be43a7d7-64dd-432c-972b-84de63d9bc2c" 00:17:47.570 } 00:17:47.570 ] 00:17:47.570 }, 00:17:47.570 { 00:17:47.570 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:47.570 "subtype": "NVMe", 00:17:47.570 "listen_addresses": [ 00:17:47.570 { 00:17:47.570 "trtype": "VFIOUSER", 00:17:47.570 "adrfam": "IPv4", 00:17:47.570 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:47.570 "trsvcid": "0" 00:17:47.570 } 00:17:47.570 ], 00:17:47.570 "allow_any_host": true, 00:17:47.570 "hosts": [], 00:17:47.570 "serial_number": "SPDK2", 
00:17:47.570 "model_number": "SPDK bdev Controller", 00:17:47.570 "max_namespaces": 32, 00:17:47.570 "min_cntlid": 1, 00:17:47.570 "max_cntlid": 65519, 00:17:47.570 "namespaces": [ 00:17:47.570 { 00:17:47.570 "nsid": 1, 00:17:47.570 "bdev_name": "Malloc2", 00:17:47.570 "name": "Malloc2", 00:17:47.570 "nguid": "AAEA8FF68F8545CCB3591C4F549EE06C", 00:17:47.570 "uuid": "aaea8ff6-8f85-45cc-b359-1c4f549ee06c" 00:17:47.570 } 00:17:47.570 ] 00:17:47.570 } 00:17:47.570 ] 00:17:47.570 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:47.570 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2264665 00:17:47.570 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:47.570 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:47.570 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:47.570 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:47.570 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:47.570 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:47.570 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:47.570 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:47.570 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.570 [2024-07-26 01:53:29.501498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:47.828 Malloc4 00:17:47.828 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:48.086 [2024-07-26 01:53:29.861124] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:48.086 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:48.086 Asynchronous Event Request test 00:17:48.086 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:48.086 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:48.086 Registering asynchronous event callbacks... 00:17:48.086 Starting namespace attribute notice tests for all controllers... 00:17:48.086 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:48.086 aer_cb - Changed Namespace 00:17:48.086 Cleaning up... 
00:17:48.343 [ 00:17:48.343 { 00:17:48.343 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:48.343 "subtype": "Discovery", 00:17:48.343 "listen_addresses": [], 00:17:48.343 "allow_any_host": true, 00:17:48.343 "hosts": [] 00:17:48.343 }, 00:17:48.343 { 00:17:48.343 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:48.343 "subtype": "NVMe", 00:17:48.343 "listen_addresses": [ 00:17:48.343 { 00:17:48.343 "trtype": "VFIOUSER", 00:17:48.343 "adrfam": "IPv4", 00:17:48.343 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:48.343 "trsvcid": "0" 00:17:48.343 } 00:17:48.343 ], 00:17:48.343 "allow_any_host": true, 00:17:48.343 "hosts": [], 00:17:48.343 "serial_number": "SPDK1", 00:17:48.343 "model_number": "SPDK bdev Controller", 00:17:48.343 "max_namespaces": 32, 00:17:48.343 "min_cntlid": 1, 00:17:48.343 "max_cntlid": 65519, 00:17:48.343 "namespaces": [ 00:17:48.343 { 00:17:48.343 "nsid": 1, 00:17:48.343 "bdev_name": "Malloc1", 00:17:48.343 "name": "Malloc1", 00:17:48.343 "nguid": "192DC55AFED34CB5BFE2A731A920B504", 00:17:48.343 "uuid": "192dc55a-fed3-4cb5-bfe2-a731a920b504" 00:17:48.344 }, 00:17:48.344 { 00:17:48.344 "nsid": 2, 00:17:48.344 "bdev_name": "Malloc3", 00:17:48.344 "name": "Malloc3", 00:17:48.344 "nguid": "BE43A7D764DD432C972B84DE63D9BC2C", 00:17:48.344 "uuid": "be43a7d7-64dd-432c-972b-84de63d9bc2c" 00:17:48.344 } 00:17:48.344 ] 00:17:48.344 }, 00:17:48.344 { 00:17:48.344 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:48.344 "subtype": "NVMe", 00:17:48.344 "listen_addresses": [ 00:17:48.344 { 00:17:48.344 "trtype": "VFIOUSER", 00:17:48.344 "adrfam": "IPv4", 00:17:48.344 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:48.344 "trsvcid": "0" 00:17:48.344 } 00:17:48.344 ], 00:17:48.344 "allow_any_host": true, 00:17:48.344 "hosts": [], 00:17:48.344 "serial_number": "SPDK2", 00:17:48.344 "model_number": "SPDK bdev Controller", 00:17:48.344 "max_namespaces": 32, 00:17:48.344 "min_cntlid": 1, 00:17:48.344 "max_cntlid": 65519, 00:17:48.344 "namespaces": [ 
00:17:48.344 { 00:17:48.344 "nsid": 1, 00:17:48.344 "bdev_name": "Malloc2", 00:17:48.344 "name": "Malloc2", 00:17:48.344 "nguid": "AAEA8FF68F8545CCB3591C4F549EE06C", 00:17:48.344 "uuid": "aaea8ff6-8f85-45cc-b359-1c4f549ee06c" 00:17:48.344 }, 00:17:48.344 { 00:17:48.344 "nsid": 2, 00:17:48.344 "bdev_name": "Malloc4", 00:17:48.344 "name": "Malloc4", 00:17:48.344 "nguid": "55847F7D8E9E4E4C84DD7283A4880CF4", 00:17:48.344 "uuid": "55847f7d-8e9e-4e4c-84dd-7283a4880cf4" 00:17:48.344 } 00:17:48.344 ] 00:17:48.344 } 00:17:48.344 ] 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2264665 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2259091 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2259091 ']' 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2259091 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2259091 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2259091' 00:17:48.344 killing process with pid 2259091 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 2259091 00:17:48.344 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2259091 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2264815 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2264815' 00:17:48.601 Process pid: 2264815 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2264815 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2264815 ']' 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:48.601 
01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:48.601 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:48.601 [2024-07-26 01:53:30.543545] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:48.601 [2024-07-26 01:53:30.544538] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:17:48.601 [2024-07-26 01:53:30.544592] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.601 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.601 [2024-07-26 01:53:30.603524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.859 [2024-07-26 01:53:30.689287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.859 [2024-07-26 01:53:30.689353] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.859 [2024-07-26 01:53:30.689367] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.859 [2024-07-26 01:53:30.689386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.859 [2024-07-26 01:53:30.689409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:48.859 [2024-07-26 01:53:30.689500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.859 [2024-07-26 01:53:30.689566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.859 [2024-07-26 01:53:30.689632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.859 [2024-07-26 01:53:30.689634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.859 [2024-07-26 01:53:30.785183] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:48.859 [2024-07-26 01:53:30.785435] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:48.859 [2024-07-26 01:53:30.785719] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:48.859 [2024-07-26 01:53:30.786347] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:48.859 [2024-07-26 01:53:30.786601] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:17:48.859 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.859 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:48.859 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:50.231 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:50.231 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:50.231 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:50.231 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:50.231 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:50.231 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:50.490 Malloc1 00:17:50.490 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:50.748 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:51.006 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:51.264 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:51.264 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:51.264 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:51.524 Malloc2 00:17:51.524 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:51.781 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:52.039 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:52.297 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:52.297 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2264815 00:17:52.297 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2264815 ']' 00:17:52.297 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2264815 00:17:52.297 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:52.297 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.297 01:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2264815 00:17:52.297 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:52.297 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:52.297 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2264815' 00:17:52.297 killing process with pid 2264815 00:17:52.297 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2264815 00:17:52.297 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2264815 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:52.555 00:17:52.555 real 0m52.318s 00:17:52.555 user 3m26.729s 00:17:52.555 sys 0m4.306s 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:52.555 ************************************ 00:17:52.555 END TEST nvmf_vfio_user 00:17:52.555 ************************************ 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.555 ************************************ 00:17:52.555 START TEST nvmf_vfio_user_nvme_compliance 00:17:52.555 ************************************ 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:52.555 * Looking for test storage... 00:17:52.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.555 01:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.555 01:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2265293 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2265293' 00:17:52.555 Process pid: 2265293 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2265293 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2265293 ']' 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.555 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.556 01:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.556 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.556 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:52.814 [2024-07-26 01:53:34.583081] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:17:52.814 [2024-07-26 01:53:34.583161] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.814 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.814 [2024-07-26 01:53:34.641680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.814 [2024-07-26 01:53:34.727272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.814 [2024-07-26 01:53:34.727359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.814 [2024-07-26 01:53:34.727373] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.814 [2024-07-26 01:53:34.727384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.814 [2024-07-26 01:53:34.727394] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:52.814 [2024-07-26 01:53:34.727507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.814 [2024-07-26 01:53:34.727574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.814 [2024-07-26 01:53:34.727576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.072 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.072 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:53.072 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.010 01:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:54.010 malloc0 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:54.010 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:54.010 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.289 00:17:54.289 00:17:54.289 CUnit - A unit testing framework for C - Version 2.1-3 00:17:54.289 http://cunit.sourceforge.net/ 00:17:54.289 00:17:54.289 00:17:54.289 Suite: nvme_compliance 00:17:54.289 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 01:53:36.074621] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.289 [2024-07-26 01:53:36.076112] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:54.289 [2024-07-26 01:53:36.076139] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:54.289 [2024-07-26 01:53:36.076151] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:54.289 [2024-07-26 01:53:36.080652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.289 passed 00:17:54.289 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 01:53:36.165269] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.289 [2024-07-26 01:53:36.168290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.289 passed 00:17:54.290 Test: admin_identify_ns ...[2024-07-26 01:53:36.251609] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.556 [2024-07-26 01:53:36.311094] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:54.556 [2024-07-26 01:53:36.319078] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:54.556 [2024-07-26 
01:53:36.343234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.556 passed 00:17:54.556 Test: admin_get_features_mandatory_features ...[2024-07-26 01:53:36.422787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.556 [2024-07-26 01:53:36.425805] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.556 passed 00:17:54.556 Test: admin_get_features_optional_features ...[2024-07-26 01:53:36.511358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.556 [2024-07-26 01:53:36.514387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.556 passed 00:17:54.814 Test: admin_set_features_number_of_queues ...[2024-07-26 01:53:36.597612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.814 [2024-07-26 01:53:36.700185] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.814 passed 00:17:54.814 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 01:53:36.782723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.814 [2024-07-26 01:53:36.785745] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.814 passed 00:17:55.072 Test: admin_get_log_page_with_lpo ...[2024-07-26 01:53:36.867883] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.072 [2024-07-26 01:53:36.936073] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:55.072 [2024-07-26 01:53:36.949156] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.072 passed 00:17:55.072 Test: fabric_property_get ...[2024-07-26 01:53:37.031662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.072 [2024-07-26 01:53:37.032930] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:55.072 [2024-07-26 01:53:37.034684] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.072 passed 00:17:55.330 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 01:53:37.117215] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.330 [2024-07-26 01:53:37.118526] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:55.330 [2024-07-26 01:53:37.120238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.330 passed 00:17:55.330 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 01:53:37.206407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.330 [2024-07-26 01:53:37.290071] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:55.330 [2024-07-26 01:53:37.306070] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:55.330 [2024-07-26 01:53:37.311180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.588 passed 00:17:55.588 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 01:53:37.395947] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.588 [2024-07-26 01:53:37.397237] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:55.588 [2024-07-26 01:53:37.398969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.588 passed 00:17:55.588 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 01:53:37.479113] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.588 [2024-07-26 01:53:37.557072] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:17:55.588 [2024-07-26 01:53:37.581067] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:55.588 [2024-07-26 01:53:37.586179] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.846 passed 00:17:55.846 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 01:53:37.669165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.846 [2024-07-26 01:53:37.670478] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:55.846 [2024-07-26 01:53:37.670530] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:55.846 [2024-07-26 01:53:37.672191] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.846 passed 00:17:55.846 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 01:53:37.755301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.846 [2024-07-26 01:53:37.841084] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:55.846 [2024-07-26 01:53:37.849072] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:55.846 [2024-07-26 01:53:37.857096] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:56.104 [2024-07-26 01:53:37.865083] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:56.104 [2024-07-26 01:53:37.894168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.104 passed 00:17:56.104 Test: admin_create_io_sq_verify_pc ...[2024-07-26 01:53:37.976676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.104 [2024-07-26 01:53:37.993083] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:56.104 
[2024-07-26 01:53:38.011085] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.104 passed 00:17:56.104 Test: admin_create_io_qp_max_qps ...[2024-07-26 01:53:38.091643] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.477 [2024-07-26 01:53:39.193077] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:57.734 [2024-07-26 01:53:39.579617] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.734 passed 00:17:57.734 Test: admin_create_io_sq_shared_cq ...[2024-07-26 01:53:39.660815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.992 [2024-07-26 01:53:39.792066] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:57.992 [2024-07-26 01:53:39.829158] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.992 passed 00:17:57.992 00:17:57.992 Run Summary: Type Total Ran Passed Failed Inactive 00:17:57.992 suites 1 1 n/a 0 0 00:17:57.992 tests 18 18 18 0 0 00:17:57.992 asserts 360 360 360 0 n/a 00:17:57.992 00:17:57.992 Elapsed time = 1.555 seconds 00:17:57.992 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2265293 00:17:57.992 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2265293 ']' 00:17:57.992 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2265293 00:17:57.992 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:57.992 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.992 01:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2265293 00:17:57.992 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.992 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.992 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2265293' 00:17:57.992 killing process with pid 2265293 00:17:57.992 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2265293 00:17:57.992 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2265293 00:17:58.250 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:58.250 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:58.250 00:17:58.250 real 0m5.702s 00:17:58.250 user 0m16.034s 00:17:58.250 sys 0m0.550s 00:17:58.250 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:58.250 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:58.251 ************************************ 00:17:58.251 END TEST nvmf_vfio_user_nvme_compliance 00:17:58.251 ************************************ 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:58.251 ************************************ 00:17:58.251 START TEST nvmf_vfio_user_fuzz 00:17:58.251 ************************************ 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:58.251 * Looking for test storage... 00:17:58.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.251 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.509 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.510 01:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:58.510 01:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2266011 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2266011' 00:17:58.510 Process pid: 2266011 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2266011 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2266011 ']' 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.510 01:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.510 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:58.768 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.768 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:58.768 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.701 malloc0 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:59.701 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:31.761 Fuzzing completed. Shutting down the fuzz application 00:18:31.761 00:18:31.761 Dumping successful admin opcodes: 00:18:31.761 8, 9, 10, 24, 00:18:31.761 Dumping successful io opcodes: 00:18:31.761 0, 00:18:31.761 NS: 0x200003a1ef00 I/O qp, Total commands completed: 568501, total successful commands: 2185, random_seed: 1163971968 00:18:31.761 NS: 0x200003a1ef00 admin qp, Total commands completed: 72768, total successful commands: 572, random_seed: 2730654656 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2266011 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2266011 ']' 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2266011 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2266011 00:18:31.761 01:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2266011' 00:18:31.761 killing process with pid 2266011 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2266011 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2266011 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:31.761 00:18:31.761 real 0m32.177s 00:18:31.761 user 0m31.323s 00:18:31.761 sys 0m28.837s 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:31.761 ************************************ 00:18:31.761 END TEST nvmf_vfio_user_fuzz 00:18:31.761 ************************************ 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:31.761 ************************************ 00:18:31.761 START TEST nvmf_auth_target 00:18:31.761 ************************************ 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:31.761 * Looking for test storage... 00:18:31.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.761 01:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
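The trace above shows target/auth.sh declaring its test matrix: three HMAC digests (auth.sh@13) and six DH groups (auth.sh@14) that the suite iterates over. A minimal sketch of that matrix, taken directly from the values in the trace:

```python
# Auth test matrix declared at target/auth.sh@13/@14 in the trace above:
# three HMAC digests x six Diffie-Hellman groups = 18 combinations.
digests = ["sha256", "sha384", "sha512"]
dhgroups = ["null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192"]

matrix = [(d, g) for d in digests for g in dhgroups]
print(len(matrix))  # 18 combinations
```

Each (digest, dhgroup) pair gets exercised against the target later in the run.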
gather_supported_nvmf_pci_devs 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.761 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:32.695 01:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:32.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:32.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:32.695 01:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:32.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:32.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:32.695 01:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:32.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:32.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:18:32.695 00:18:32.695 --- 10.0.0.2 ping statistics --- 00:18:32.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.695 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:18:32.695 00:18:32.695 --- 10.0.0.1 ping statistics --- 00:18:32.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.695 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2272055 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2272055 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2272055 ']' 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.695 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2272083 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@726 -- # digest=null 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:32.953 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.211 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a828ce8cb052fd4a0dd5fdb665afc8665796f28d59609d05 00:18:33.211 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:33.212 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8vM 00:18:33.212 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a828ce8cb052fd4a0dd5fdb665afc8665796f28d59609d05 0 00:18:33.212 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a828ce8cb052fd4a0dd5fdb665afc8665796f28d59609d05 0 00:18:33.212 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.212 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.212 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a828ce8cb052fd4a0dd5fdb665afc8665796f28d59609d05 00:18:33.212 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:33.212 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8vM 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8vM 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.8vM 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
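In the key-generation steps traced above, gen_dhchap_key draws random bytes with `xxd -p -c0 /dev/urandom` and then format_dhchap_key/format_key (nvmf/common.sh@705) runs an inline `python -` snippet to emit a DHHC-1 secret. A hedged sketch of that formatting, assuming the standard NVMe DH-HMAC-CHAP secret representation (base64 of the key bytes with a little-endian CRC-32 appended; the second field is the hash identifier — 00 for the null-digest key generated at auth.sh@67):

```python
import base64
import zlib

def format_dhchap_key(key_hex: str, digest: int) -> str:
    """Sketch of format_key: DHHC-1:<digest>:<base64(key + crc32)>:
    (assumption based on the DH-HMAC-CHAP secret format, not shown in this log)."""
    raw = bytes.fromhex(key_hex)
    # CRC-32 of the key bytes, appended little-endian before base64 encoding.
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(raw + crc).decode())

# The 48-hex-char (24-byte) key generated for keys[0] in the trace above;
# the result is what gets written to /tmp/spdk.key-null.8vM and chmod 0600.
secret = format_dhchap_key(
    "a828ce8cb052fd4a0dd5fdb665afc8665796f28d59609d05", 0)
print(secret)
```

The digest field maps to the log's `digest=0..3` values (null, sha256, sha384, sha512), which is why the sha512 controller key a few steps later is formatted with `format_dhchap_key ... 3`.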
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d54f1b5e228befa57d66201cecb02e30df92918da25b45240c4afa018e7af621 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ie0 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d54f1b5e228befa57d66201cecb02e30df92918da25b45240c4afa018e7af621 3 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d54f1b5e228befa57d66201cecb02e30df92918da25b45240c4afa018e7af621 3 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d54f1b5e228befa57d66201cecb02e30df92918da25b45240c4afa018e7af621 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ie0 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ie0 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ie0 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1945e4b2c2031da7c7817db2c9d92e57 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.tRQ 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1945e4b2c2031da7c7817db2c9d92e57 1 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
1945e4b2c2031da7c7817db2c9d92e57 1 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1945e4b2c2031da7c7817db2c9d92e57 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.tRQ 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.tRQ 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.tRQ 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1a01eade1d3ef13fe8af926ae95d370b56362b34a8498dd3 00:18:33.212 01:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Y6q 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1a01eade1d3ef13fe8af926ae95d370b56362b34a8498dd3 2 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1a01eade1d3ef13fe8af926ae95d370b56362b34a8498dd3 2 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1a01eade1d3ef13fe8af926ae95d370b56362b34a8498dd3 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Y6q 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Y6q 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Y6q 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a40b7c2f6845e10dd6de31e219d7d6bd9b334351fcaec2c9 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.AfB 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a40b7c2f6845e10dd6de31e219d7d6bd9b334351fcaec2c9 2 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a40b7c2f6845e10dd6de31e219d7d6bd9b334351fcaec2c9 2 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a40b7c2f6845e10dd6de31e219d7d6bd9b334351fcaec2c9 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.212 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.AfB 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.AfB 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.AfB 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f518c690cea8e5a63d6710adc342b13f 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:33.470 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.G9H 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f518c690cea8e5a63d6710adc342b13f 1 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f518c690cea8e5a63d6710adc342b13f 1 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f518c690cea8e5a63d6710adc342b13f 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.G9H 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.G9H 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.G9H 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6ca7b41a3cd50fecb68d1dc44fdbd6dfc4a932547cb84b275c158822c686eb68 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.h6K 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6ca7b41a3cd50fecb68d1dc44fdbd6dfc4a932547cb84b275c158822c686eb68 3 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 6ca7b41a3cd50fecb68d1dc44fdbd6dfc4a932547cb84b275c158822c686eb68 3 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6ca7b41a3cd50fecb68d1dc44fdbd6dfc4a932547cb84b275c158822c686eb68 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.h6K 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.h6K 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.h6K 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2272055 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2272055 ']' 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
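The gen_dhchap_key / format_dhchap_key trace above reduces to two steps: draw `len/2` random bytes from /dev/urandom as a hex string (`xxd -p -c0`), then wrap that string as `DHHC-1:<digest>:<base64(...)>:` via the `python -` heredoc. A minimal sketch of the formatting step; the helper name mirrors `format_key` in nvmf/common.sh, but the little-endian CRC32 tail is an assumption inferred from the secrets printed later in this log, not copied from the script:

```shell
# Sketch of the format_key step: DHHC-1:<digest as 2 hex digits>:<base64 payload>:
# The payload is assumed to be the ASCII hex key plus a 4-byte little-endian
# CRC32 of it (inferred from the 52-byte payloads of the secrets in this log).
format_key() {
    prefix=$1 key=$2 digest=$3
    python3 -c '
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
crc = zlib.crc32(key.encode()).to_bytes(4, "little")
print("%s:%02x:%s:" % (prefix, digest, base64.b64encode(key.encode() + crc).decode()))
' "$prefix" "$key" "$digest"
}

# The sha384 ckey generated at 01:54:15 above (digest index 2):
format_key DHHC-1 1a01eade1d3ef13fe8af926ae95d370b56362b34a8498dd3 2
```

Generating the raw key itself is just the `xxd -p -c0 -l 24 /dev/urandom` call visible in the trace; only the wrapping needs the python helper.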
00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.471 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.729 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.729 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:33.729 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2272083 /var/tmp/host.sock 00:18:33.729 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2272083 ']' 00:18:33.729 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:33.729 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.729 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:33.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
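The two `waitforlisten` calls above (for /var/tmp/spdk.sock and /var/tmp/host.sock) block until the freshly started daemon accepts RPC connections. A hedged standalone sketch of that pattern; the real helper in autotest_common.sh differs in detail, and `wait_for_sock` here is an illustrative name:

```shell
# Poll until a process is still alive AND its UNIX-domain RPC socket accepts
# connections; give up after $retries attempts. Returns 0 on success, 1 on
# failure (dead process or retries exhausted).
wait_for_sock() {
    pid=$1 sock=$2 retries=${3:-100}
    while [ "$retries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        python3 -c '
import socket, sys
s = socket.socket(socket.AF_UNIX)
try:
    s.connect(sys.argv[1])
except OSError:
    sys.exit(1)
' "$sock" && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}
```

With the socket missing, the call fails cleanly instead of hanging, which is why the trace can log `max_retries=100` and still make progress.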
00:18:33.729 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.729 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8vM 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.8vM 00:18:33.986 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.8vM 00:18:34.244 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.ie0 ]] 00:18:34.244 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ie0 00:18:34.244 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.244 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.244 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.244 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ie0 00:18:34.244 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ie0 00:18:34.502 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:34.502 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tRQ 00:18:34.502 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.502 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.502 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.502 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tRQ 00:18:34.502 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.tRQ 00:18:34.760 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.Y6q ]] 00:18:34.760 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Y6q 00:18:34.760 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.760 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.760 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.760 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Y6q 00:18:34.760 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Y6q 00:18:35.018 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:35.018 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AfB 00:18:35.018 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.018 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.018 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.018 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.AfB 00:18:35.018 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.AfB 00:18:35.276 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.G9H ]] 00:18:35.276 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G9H 00:18:35.276 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.276 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.276 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.276 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G9H 00:18:35.276 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G9H 00:18:35.534 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:35.534 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.h6K 00:18:35.534 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.534 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.534 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.534 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.h6K 00:18:35.534 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.h6K 00:18:35.792 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:18:35.792 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:35.792 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.792 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.792 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.792 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:36.049 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:36.049 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.049 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:36.049 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:36.049 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.049 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.049 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.049 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.049 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
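The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line in the connect_authenticate trace above is conditional option construction: the `:+` expansion emits the `--dhchap-ctrlr-key` flag only when a controller key was generated for that index (ckeys[3] is empty, so key3 later connects without one). A small bash illustration of the same idiom, with made-up array contents:

```shell
# ${arr[i]:+...} expands to the alternate text only when arr[i] is set and
# non-empty -- the trick auth.sh uses to drop --dhchap-ctrlr-key for key3.
ckeys=("/tmp/spdk.key-sha512.XXX" "")   # hypothetical paths; index 1 has no ckey

ctrlr_key_flag() {
    local i=$1
    printf '%s' "${ckeys[$i]:+--dhchap-ctrlr-key ckey$i}"
}

ctrlr_key_flag 0   # -> --dhchap-ctrlr-key ckey0
ctrlr_key_flag 1   # -> (empty)
```

Because the expansion collapses to nothing rather than an empty-string argument, the later `bdev_nvme_attach_controller` invocation stays well-formed either way.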
00:18:36.049 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.049 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.050 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.307 00:18:36.307 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.307 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.307 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:36.565 { 00:18:36.565 "cntlid": 1, 00:18:36.565 "qid": 0, 00:18:36.565 "state": "enabled", 00:18:36.565 "thread": "nvmf_tgt_poll_group_000", 00:18:36.565 "listen_address": { 00:18:36.565 "trtype": "TCP", 00:18:36.565 "adrfam": "IPv4", 00:18:36.565 "traddr": "10.0.0.2", 00:18:36.565 "trsvcid": "4420" 00:18:36.565 }, 00:18:36.565 "peer_address": { 00:18:36.565 "trtype": "TCP", 00:18:36.565 "adrfam": "IPv4", 00:18:36.565 "traddr": "10.0.0.1", 00:18:36.565 "trsvcid": "49560" 00:18:36.565 }, 00:18:36.565 "auth": { 00:18:36.565 "state": "completed", 00:18:36.565 "digest": "sha256", 00:18:36.565 "dhgroup": "null" 00:18:36.565 } 00:18:36.565 } 00:18:36.565 ]' 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.565 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.828 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:18:37.802 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.802 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.802 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.802 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.802 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.802 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.802 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:37.802 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:38.060 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:38.060 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.060 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.060 01:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:38.060 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.060 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.060 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.060 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.060 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.060 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.060 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.060 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.318 00:18:38.576 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.576 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.576 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.834 { 00:18:38.834 "cntlid": 3, 00:18:38.834 "qid": 0, 00:18:38.834 "state": "enabled", 00:18:38.834 "thread": "nvmf_tgt_poll_group_000", 00:18:38.834 "listen_address": { 00:18:38.834 "trtype": "TCP", 00:18:38.834 "adrfam": "IPv4", 00:18:38.834 "traddr": "10.0.0.2", 00:18:38.834 "trsvcid": "4420" 00:18:38.834 }, 00:18:38.834 "peer_address": { 00:18:38.834 "trtype": "TCP", 00:18:38.834 "adrfam": "IPv4", 00:18:38.834 "traddr": "10.0.0.1", 00:18:38.834 "trsvcid": "49236" 00:18:38.834 }, 00:18:38.834 "auth": { 00:18:38.834 "state": "completed", 00:18:38.834 "digest": "sha256", 00:18:38.834 "dhgroup": "null" 00:18:38.834 } 00:18:38.834 } 00:18:38.834 ]' 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:38.834 01:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.834 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.092 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:18:40.025 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.025 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.025 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.025 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.025 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.025 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.025 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.025 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.283 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:40.283 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.283 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.283 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.283 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.283 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.283 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.283 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.283 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.283 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.283 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.283 
01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.541 00:18:40.541 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.541 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.541 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.799 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.799 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.799 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.799 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.799 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.799 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.799 { 00:18:40.799 "cntlid": 5, 00:18:40.799 "qid": 0, 00:18:40.799 "state": "enabled", 00:18:40.799 "thread": "nvmf_tgt_poll_group_000", 00:18:40.799 "listen_address": { 00:18:40.799 "trtype": "TCP", 00:18:40.799 "adrfam": "IPv4", 00:18:40.799 "traddr": "10.0.0.2", 00:18:40.799 "trsvcid": "4420" 00:18:40.799 }, 00:18:40.799 "peer_address": { 00:18:40.799 "trtype": "TCP", 00:18:40.799 "adrfam": "IPv4", 00:18:40.799 "traddr": 
"10.0.0.1", 00:18:40.799 "trsvcid": "49258" 00:18:40.799 }, 00:18:40.799 "auth": { 00:18:40.799 "state": "completed", 00:18:40.799 "digest": "sha256", 00:18:40.799 "dhgroup": "null" 00:18:40.799 } 00:18:40.799 } 00:18:40.799 ]' 00:18:40.799 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.799 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.799 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.057 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:41.057 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.057 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.057 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.057 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.314 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:18:42.245 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.245 01:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.245 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.245 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.245 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.245 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.245 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.245 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.502 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.760 00:18:42.760 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.760 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.760 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.018 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.018 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.018 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.018 01:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.018 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.018 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.018 { 00:18:43.018 "cntlid": 7, 00:18:43.018 "qid": 0, 00:18:43.018 "state": "enabled", 00:18:43.018 "thread": "nvmf_tgt_poll_group_000", 00:18:43.018 "listen_address": { 00:18:43.018 "trtype": "TCP", 00:18:43.018 "adrfam": "IPv4", 00:18:43.018 "traddr": "10.0.0.2", 00:18:43.018 "trsvcid": "4420" 00:18:43.018 }, 00:18:43.018 "peer_address": { 00:18:43.018 "trtype": "TCP", 00:18:43.018 "adrfam": "IPv4", 00:18:43.018 "traddr": "10.0.0.1", 00:18:43.018 "trsvcid": "49278" 00:18:43.018 }, 00:18:43.018 "auth": { 00:18:43.018 "state": "completed", 00:18:43.018 "digest": "sha256", 00:18:43.018 "dhgroup": "null" 00:18:43.018 } 00:18:43.018 } 00:18:43.018 ]' 00:18:43.018 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.018 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.018 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.018 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:43.018 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.276 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.276 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.276 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.534 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:18:44.467 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.467 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.467 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.467 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.467 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.467 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.467 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.467 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:44.467 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.726 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.984 00:18:44.984 01:54:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.984 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.984 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.242 { 00:18:45.242 "cntlid": 9, 00:18:45.242 "qid": 0, 00:18:45.242 "state": "enabled", 00:18:45.242 "thread": "nvmf_tgt_poll_group_000", 00:18:45.242 "listen_address": { 00:18:45.242 "trtype": "TCP", 00:18:45.242 "adrfam": "IPv4", 00:18:45.242 "traddr": "10.0.0.2", 00:18:45.242 "trsvcid": "4420" 00:18:45.242 }, 00:18:45.242 "peer_address": { 00:18:45.242 "trtype": "TCP", 00:18:45.242 "adrfam": "IPv4", 00:18:45.242 "traddr": "10.0.0.1", 00:18:45.242 "trsvcid": "49312" 00:18:45.242 }, 00:18:45.242 "auth": { 00:18:45.242 "state": "completed", 00:18:45.242 "digest": "sha256", 00:18:45.242 "dhgroup": "ffdhe2048" 00:18:45.242 } 00:18:45.242 } 00:18:45.242 ]' 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.242 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.500 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:18:46.433 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.433 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.433 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.433 01:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.691 01:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.691 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.256 00:18:47.256 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.256 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.256 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.513 { 
00:18:47.513 "cntlid": 11, 00:18:47.513 "qid": 0, 00:18:47.513 "state": "enabled", 00:18:47.513 "thread": "nvmf_tgt_poll_group_000", 00:18:47.513 "listen_address": { 00:18:47.513 "trtype": "TCP", 00:18:47.513 "adrfam": "IPv4", 00:18:47.513 "traddr": "10.0.0.2", 00:18:47.513 "trsvcid": "4420" 00:18:47.513 }, 00:18:47.513 "peer_address": { 00:18:47.513 "trtype": "TCP", 00:18:47.513 "adrfam": "IPv4", 00:18:47.513 "traddr": "10.0.0.1", 00:18:47.513 "trsvcid": "49340" 00:18:47.513 }, 00:18:47.513 "auth": { 00:18:47.513 "state": "completed", 00:18:47.513 "digest": "sha256", 00:18:47.513 "dhgroup": "ffdhe2048" 00:18:47.513 } 00:18:47.513 } 00:18:47.513 ]' 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.513 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.770 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:18:48.703 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.703 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.703 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.703 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.703 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.703 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.703 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.961 01:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.218 00:18:49.218 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.218 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.218 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.476 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.476 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.476 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.476 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.476 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.476 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.476 { 00:18:49.476 "cntlid": 13, 00:18:49.476 "qid": 0, 00:18:49.476 "state": "enabled", 00:18:49.476 "thread": "nvmf_tgt_poll_group_000", 00:18:49.476 "listen_address": { 00:18:49.476 "trtype": "TCP", 00:18:49.476 "adrfam": "IPv4", 00:18:49.476 "traddr": "10.0.0.2", 00:18:49.476 "trsvcid": "4420" 00:18:49.476 }, 00:18:49.476 "peer_address": { 00:18:49.476 "trtype": "TCP", 00:18:49.476 "adrfam": "IPv4", 00:18:49.476 "traddr": "10.0.0.1", 00:18:49.476 "trsvcid": "59644" 00:18:49.476 }, 00:18:49.476 "auth": { 00:18:49.476 "state": "completed", 00:18:49.476 "digest": "sha256", 00:18:49.476 "dhgroup": "ffdhe2048" 00:18:49.476 } 00:18:49.476 } 00:18:49.476 ]' 00:18:49.476 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.476 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.476 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.733 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.733 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.733 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.733 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.733 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.990 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:18:50.922 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.922 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.922 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.922 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.922 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.922 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.922 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:50.922 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.179 01:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.437 00:18:51.437 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.437 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.437 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.693 { 00:18:51.693 "cntlid": 15, 00:18:51.693 "qid": 0, 00:18:51.693 "state": "enabled", 00:18:51.693 "thread": "nvmf_tgt_poll_group_000", 00:18:51.693 "listen_address": { 00:18:51.693 "trtype": "TCP", 00:18:51.693 "adrfam": "IPv4", 00:18:51.693 "traddr": "10.0.0.2", 00:18:51.693 "trsvcid": "4420" 00:18:51.693 }, 00:18:51.693 "peer_address": { 00:18:51.693 "trtype": "TCP", 00:18:51.693 "adrfam": "IPv4", 00:18:51.693 "traddr": "10.0.0.1", 00:18:51.693 "trsvcid": "59662" 00:18:51.693 }, 00:18:51.693 "auth": { 
00:18:51.693 "state": "completed", 00:18:51.693 "digest": "sha256", 00:18:51.693 "dhgroup": "ffdhe2048" 00:18:51.693 } 00:18:51.693 } 00:18:51.693 ]' 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.693 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.950 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:18:53.346 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.347 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.347 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.347 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.347 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.347 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.347 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.347 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.347 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.347 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.645 00:18:53.645 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.645 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.645 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.903 { 00:18:53.903 "cntlid": 17, 00:18:53.903 "qid": 0, 00:18:53.903 "state": "enabled", 00:18:53.903 "thread": "nvmf_tgt_poll_group_000", 00:18:53.903 "listen_address": { 00:18:53.903 "trtype": "TCP", 00:18:53.903 "adrfam": "IPv4", 00:18:53.903 "traddr": "10.0.0.2", 00:18:53.903 "trsvcid": "4420" 00:18:53.903 }, 00:18:53.903 "peer_address": { 00:18:53.903 "trtype": "TCP", 00:18:53.903 "adrfam": "IPv4", 00:18:53.903 "traddr": "10.0.0.1", 00:18:53.903 "trsvcid": "59686" 00:18:53.903 }, 00:18:53.903 "auth": { 00:18:53.903 "state": "completed", 00:18:53.903 "digest": "sha256", 00:18:53.903 "dhgroup": "ffdhe3072" 00:18:53.903 } 00:18:53.903 } 00:18:53.903 ]' 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.903 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.162 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:18:55.096 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.096 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.096 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.096 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.096 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.096 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.096 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.096 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.354 01:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:55.354 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.354 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.354 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:55.354 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.354 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.354 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.354 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.354 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.354 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.354 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.354 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:55.922 00:18:55.922 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.922 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.922 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.922 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.180 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.180 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.180 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.180 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.180 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.180 { 00:18:56.180 "cntlid": 19, 00:18:56.180 "qid": 0, 00:18:56.180 "state": "enabled", 00:18:56.180 "thread": "nvmf_tgt_poll_group_000", 00:18:56.180 "listen_address": { 00:18:56.180 "trtype": "TCP", 00:18:56.180 "adrfam": "IPv4", 00:18:56.180 "traddr": "10.0.0.2", 00:18:56.180 "trsvcid": "4420" 00:18:56.180 }, 00:18:56.180 "peer_address": { 00:18:56.180 "trtype": "TCP", 00:18:56.180 "adrfam": "IPv4", 00:18:56.180 "traddr": "10.0.0.1", 00:18:56.180 "trsvcid": "59728" 00:18:56.180 }, 00:18:56.180 "auth": { 00:18:56.181 "state": "completed", 00:18:56.181 "digest": "sha256", 00:18:56.181 "dhgroup": "ffdhe3072" 00:18:56.181 } 00:18:56.181 } 00:18:56.181 ]' 00:18:56.181 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.181 
01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.181 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.181 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.181 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.181 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.181 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.181 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.438 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:18:57.373 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.373 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.373 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.373 01:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.373 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.373 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.373 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.373 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.631 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:57.631 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.631 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.631 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:57.631 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:57.631 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.631 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.631 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.631 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.631 01:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.631 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.631 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.890 00:18:57.890 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.890 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.890 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.149 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.149 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.149 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.149 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.149 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.407 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.407 { 
00:18:58.407 "cntlid": 21, 00:18:58.407 "qid": 0, 00:18:58.407 "state": "enabled", 00:18:58.407 "thread": "nvmf_tgt_poll_group_000", 00:18:58.407 "listen_address": { 00:18:58.407 "trtype": "TCP", 00:18:58.407 "adrfam": "IPv4", 00:18:58.407 "traddr": "10.0.0.2", 00:18:58.407 "trsvcid": "4420" 00:18:58.407 }, 00:18:58.407 "peer_address": { 00:18:58.407 "trtype": "TCP", 00:18:58.407 "adrfam": "IPv4", 00:18:58.407 "traddr": "10.0.0.1", 00:18:58.407 "trsvcid": "32860" 00:18:58.407 }, 00:18:58.407 "auth": { 00:18:58.407 "state": "completed", 00:18:58.407 "digest": "sha256", 00:18:58.407 "dhgroup": "ffdhe3072" 00:18:58.407 } 00:18:58.407 } 00:18:58.407 ]' 00:18:58.407 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.407 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.407 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.407 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.407 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.407 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.407 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.407 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.665 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:18:59.602 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.602 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.602 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.602 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.602 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.602 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.602 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.602 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.860 01:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.118 00:19:00.118 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.118 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.118 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.375 01:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.375 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.375 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.375 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.375 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.375 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.375 { 00:19:00.375 "cntlid": 23, 00:19:00.375 "qid": 0, 00:19:00.375 "state": "enabled", 00:19:00.375 "thread": "nvmf_tgt_poll_group_000", 00:19:00.375 "listen_address": { 00:19:00.375 "trtype": "TCP", 00:19:00.375 "adrfam": "IPv4", 00:19:00.375 "traddr": "10.0.0.2", 00:19:00.375 "trsvcid": "4420" 00:19:00.375 }, 00:19:00.375 "peer_address": { 00:19:00.375 "trtype": "TCP", 00:19:00.375 "adrfam": "IPv4", 00:19:00.375 "traddr": "10.0.0.1", 00:19:00.375 "trsvcid": "32882" 00:19:00.375 }, 00:19:00.375 "auth": { 00:19:00.375 "state": "completed", 00:19:00.375 "digest": "sha256", 00:19:00.375 "dhgroup": "ffdhe3072" 00:19:00.375 } 00:19:00.375 } 00:19:00.375 ]' 00:19:00.376 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.633 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.633 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.633 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.633 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.633 01:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.633 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.633 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.891 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:19:01.826 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.826 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.826 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.826 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.826 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.826 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.826 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.826 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:01.826 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.083 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:02.083 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.083 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.083 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:02.083 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:02.083 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.083 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.083 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.083 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.083 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.083 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.083 01:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.341 00:19:02.341 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.341 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.341 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.600 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.858 { 00:19:02.858 "cntlid": 25, 00:19:02.858 "qid": 0, 00:19:02.858 "state": "enabled", 00:19:02.858 "thread": "nvmf_tgt_poll_group_000", 00:19:02.858 "listen_address": { 00:19:02.858 "trtype": "TCP", 00:19:02.858 "adrfam": "IPv4", 00:19:02.858 "traddr": "10.0.0.2", 00:19:02.858 "trsvcid": "4420" 00:19:02.858 }, 00:19:02.858 "peer_address": { 00:19:02.858 "trtype": "TCP", 00:19:02.858 "adrfam": "IPv4", 00:19:02.858 "traddr": "10.0.0.1", 
00:19:02.858 "trsvcid": "32908" 00:19:02.858 }, 00:19:02.858 "auth": { 00:19:02.858 "state": "completed", 00:19:02.858 "digest": "sha256", 00:19:02.858 "dhgroup": "ffdhe4096" 00:19:02.858 } 00:19:02.858 } 00:19:02.858 ]' 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.858 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.116 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:19:04.052 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:04.052 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.052 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.052 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.052 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.052 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.052 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.052 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.311 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.568 00:19:04.568 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.568 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.568 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.827 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.827 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.827 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:04.827 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.085 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.085 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.085 { 00:19:05.085 "cntlid": 27, 00:19:05.085 "qid": 0, 00:19:05.085 "state": "enabled", 00:19:05.085 "thread": "nvmf_tgt_poll_group_000", 00:19:05.085 "listen_address": { 00:19:05.085 "trtype": "TCP", 00:19:05.085 "adrfam": "IPv4", 00:19:05.085 "traddr": "10.0.0.2", 00:19:05.085 "trsvcid": "4420" 00:19:05.085 }, 00:19:05.085 "peer_address": { 00:19:05.085 "trtype": "TCP", 00:19:05.085 "adrfam": "IPv4", 00:19:05.085 "traddr": "10.0.0.1", 00:19:05.085 "trsvcid": "32926" 00:19:05.085 }, 00:19:05.085 "auth": { 00:19:05.085 "state": "completed", 00:19:05.085 "digest": "sha256", 00:19:05.085 "dhgroup": "ffdhe4096" 00:19:05.085 } 00:19:05.085 } 00:19:05.085 ]' 00:19:05.085 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.085 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.085 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.085 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.085 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.085 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.085 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.085 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.342 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:19:06.277 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.277 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.277 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.277 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.277 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.277 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.277 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.277 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.534 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.793 00:19:07.051 01:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.051 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.051 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.309 { 00:19:07.309 "cntlid": 29, 00:19:07.309 "qid": 0, 00:19:07.309 "state": "enabled", 00:19:07.309 "thread": "nvmf_tgt_poll_group_000", 00:19:07.309 "listen_address": { 00:19:07.309 "trtype": "TCP", 00:19:07.309 "adrfam": "IPv4", 00:19:07.309 "traddr": "10.0.0.2", 00:19:07.309 "trsvcid": "4420" 00:19:07.309 }, 00:19:07.309 "peer_address": { 00:19:07.309 "trtype": "TCP", 00:19:07.309 "adrfam": "IPv4", 00:19:07.309 "traddr": "10.0.0.1", 00:19:07.309 "trsvcid": "32948" 00:19:07.309 }, 00:19:07.309 "auth": { 00:19:07.309 "state": "completed", 00:19:07.309 "digest": "sha256", 00:19:07.309 "dhgroup": "ffdhe4096" 00:19:07.309 } 00:19:07.309 } 00:19:07.309 ]' 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.309 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.567 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:19:08.504 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.504 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.504 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.504 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.504 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.504 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.504 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.504 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.762 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.330 00:19:09.330 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.330 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.330 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.626 { 00:19:09.626 "cntlid": 31, 00:19:09.626 "qid": 0, 00:19:09.626 "state": "enabled", 00:19:09.626 "thread": "nvmf_tgt_poll_group_000", 
00:19:09.626 "listen_address": { 00:19:09.626 "trtype": "TCP", 00:19:09.626 "adrfam": "IPv4", 00:19:09.626 "traddr": "10.0.0.2", 00:19:09.626 "trsvcid": "4420" 00:19:09.626 }, 00:19:09.626 "peer_address": { 00:19:09.626 "trtype": "TCP", 00:19:09.626 "adrfam": "IPv4", 00:19:09.626 "traddr": "10.0.0.1", 00:19:09.626 "trsvcid": "36886" 00:19:09.626 }, 00:19:09.626 "auth": { 00:19:09.626 "state": "completed", 00:19:09.626 "digest": "sha256", 00:19:09.626 "dhgroup": "ffdhe4096" 00:19:09.626 } 00:19:09.626 } 00:19:09.626 ]' 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.626 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.883 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 
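The trace above repeats the same round-trip for each key: restrict the host to one digest/dhgroup pair, register the host with that key on the subsystem, attach, verify, then tear down. A condensed sketch of one iteration follows; the RPC socket path, addresses, and NQNs are copied from the log, and `DRY_RUN=1` makes the helper print the commands instead of executing them, since a live SPDK target is assumed and not available here.

```shell
# Sketch of one DH-HMAC-CHAP round-trip from the trace above.
# Paths/NQNs are taken from the log; a running SPDK target is assumed,
# so DRY_RUN=1 echoes each command instead of running it.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

auth_round() {
  local digest=$1 dhgroup=$2 keyid=$3
  # 1. Limit the host side to a single digest/dhgroup combination.
  run $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # 2. Allow this host on the subsystem with key N (and controller key N).
  run $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # 3. Attach; the real script then checks auth.state == "completed"
  #    via nvmf_subsystem_get_qpairs | jq before tearing down.
  run $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # 4. Tear down (the log also reconnects once with `nvme connect` in between).
  run $rpc bdev_nvme_detach_controller nvme0
  run $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}

DRY_RUN=1 auth_round sha256 ffdhe4096 0
```

The outer loops in the log iterate this over every dhgroup (`ffdhe4096`, `ffdhe6144`, ...) and every key index 0-3; key3 has no controller key, which is why its `attach_controller` call carries no `--dhchap-ctrlr-key`.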
00:19:10.816 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.816 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.816 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.816 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.816 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.816 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.816 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.816 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:10.816 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.073 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.654 00:19:11.654 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.654 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.654 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.911 01:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.911 { 00:19:11.911 "cntlid": 33, 00:19:11.911 "qid": 0, 00:19:11.911 "state": "enabled", 00:19:11.911 "thread": "nvmf_tgt_poll_group_000", 00:19:11.911 "listen_address": { 00:19:11.911 "trtype": "TCP", 00:19:11.911 "adrfam": "IPv4", 00:19:11.911 "traddr": "10.0.0.2", 00:19:11.911 "trsvcid": "4420" 00:19:11.911 }, 00:19:11.911 "peer_address": { 00:19:11.911 "trtype": "TCP", 00:19:11.911 "adrfam": "IPv4", 00:19:11.911 "traddr": "10.0.0.1", 00:19:11.911 "trsvcid": "36918" 00:19:11.911 }, 00:19:11.911 "auth": { 00:19:11.911 "state": "completed", 00:19:11.911 "digest": "sha256", 00:19:11.911 "dhgroup": "ffdhe6144" 00:19:11.911 } 00:19:11.911 } 00:19:11.911 ]' 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.911 01:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.911 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.169 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:19:13.104 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.104 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.104 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.104 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.104 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.104 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.104 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:19:13.104 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.362 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.928 00:19:13.928 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.928 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.928 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.185 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.185 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.185 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.185 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.185 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.185 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.185 { 00:19:14.185 "cntlid": 35, 00:19:14.185 "qid": 0, 00:19:14.185 "state": "enabled", 00:19:14.185 "thread": "nvmf_tgt_poll_group_000", 00:19:14.185 "listen_address": { 00:19:14.185 "trtype": "TCP", 00:19:14.186 "adrfam": "IPv4", 00:19:14.186 "traddr": "10.0.0.2", 00:19:14.186 "trsvcid": "4420" 00:19:14.186 }, 00:19:14.186 "peer_address": { 00:19:14.186 "trtype": "TCP", 00:19:14.186 "adrfam": "IPv4", 00:19:14.186 "traddr": "10.0.0.1", 00:19:14.186 "trsvcid": "36944" 00:19:14.186 
}, 00:19:14.186 "auth": { 00:19:14.186 "state": "completed", 00:19:14.186 "digest": "sha256", 00:19:14.186 "dhgroup": "ffdhe6144" 00:19:14.186 } 00:19:14.186 } 00:19:14.186 ]' 00:19:14.186 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.186 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.186 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.444 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:14.444 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.444 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.444 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.444 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.713 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:19:15.647 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.647 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.647 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.647 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.647 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.647 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.647 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.647 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.905 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.469 00:19:16.469 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.469 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.469 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.726 { 00:19:16.726 "cntlid": 37, 00:19:16.726 "qid": 0, 00:19:16.726 "state": "enabled", 00:19:16.726 "thread": "nvmf_tgt_poll_group_000", 00:19:16.726 "listen_address": { 00:19:16.726 "trtype": "TCP", 00:19:16.726 "adrfam": "IPv4", 00:19:16.726 "traddr": "10.0.0.2", 00:19:16.726 "trsvcid": "4420" 00:19:16.726 }, 00:19:16.726 "peer_address": { 00:19:16.726 "trtype": "TCP", 00:19:16.726 "adrfam": "IPv4", 00:19:16.726 "traddr": "10.0.0.1", 00:19:16.726 "trsvcid": "36976" 00:19:16.726 }, 00:19:16.726 "auth": { 00:19:16.726 "state": "completed", 00:19:16.726 "digest": "sha256", 00:19:16.726 "dhgroup": "ffdhe6144" 00:19:16.726 } 00:19:16.726 } 00:19:16.726 ]' 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.726 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:16.984 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:19:17.920 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.920 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.920 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.920 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.920 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.920 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.920 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.920 01:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.178 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:18.178 01:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.178 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.178 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:18.178 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.178 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.178 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:18.178 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.178 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.178 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.178 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.178 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.747 00:19:18.747 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.747 01:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.747 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.005 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.005 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.005 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.005 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.005 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.005 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.005 { 00:19:19.005 "cntlid": 39, 00:19:19.005 "qid": 0, 00:19:19.005 "state": "enabled", 00:19:19.005 "thread": "nvmf_tgt_poll_group_000", 00:19:19.005 "listen_address": { 00:19:19.005 "trtype": "TCP", 00:19:19.005 "adrfam": "IPv4", 00:19:19.005 "traddr": "10.0.0.2", 00:19:19.005 "trsvcid": "4420" 00:19:19.005 }, 00:19:19.005 "peer_address": { 00:19:19.005 "trtype": "TCP", 00:19:19.005 "adrfam": "IPv4", 00:19:19.005 "traddr": "10.0.0.1", 00:19:19.005 "trsvcid": "60210" 00:19:19.005 }, 00:19:19.005 "auth": { 00:19:19.005 "state": "completed", 00:19:19.005 "digest": "sha256", 00:19:19.005 "dhgroup": "ffdhe6144" 00:19:19.005 } 00:19:19.005 } 00:19:19.005 ]' 00:19:19.005 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.005 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.005 01:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.005 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.005 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.263 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.263 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.263 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.521 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:19:20.457 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.457 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.457 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.457 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.457 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.457 01:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.457 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.457 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.457 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.715 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:20.715 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.715 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.715 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:20.715 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:20.715 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.715 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.715 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.715 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.715 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.715 01:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.715 01:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.653 00:19:21.653 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.653 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.653 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.653 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.653 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.653 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.653 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.910 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.910 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.910 { 00:19:21.910 "cntlid": 41, 00:19:21.910 "qid": 0, 00:19:21.910 "state": "enabled", 00:19:21.910 "thread": 
"nvmf_tgt_poll_group_000", 00:19:21.910 "listen_address": { 00:19:21.910 "trtype": "TCP", 00:19:21.910 "adrfam": "IPv4", 00:19:21.910 "traddr": "10.0.0.2", 00:19:21.910 "trsvcid": "4420" 00:19:21.910 }, 00:19:21.910 "peer_address": { 00:19:21.910 "trtype": "TCP", 00:19:21.910 "adrfam": "IPv4", 00:19:21.910 "traddr": "10.0.0.1", 00:19:21.910 "trsvcid": "60232" 00:19:21.910 }, 00:19:21.910 "auth": { 00:19:21.910 "state": "completed", 00:19:21.910 "digest": "sha256", 00:19:21.910 "dhgroup": "ffdhe8192" 00:19:21.910 } 00:19:21.910 } 00:19:21.910 ]' 00:19:21.910 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.910 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.910 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.910 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.910 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.910 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.910 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.910 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.169 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:19:23.105 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.105 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.105 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.105 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.105 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.105 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.105 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.105 01:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.363 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.302 00:19:24.302 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.302 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.302 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.560 { 00:19:24.560 "cntlid": 43, 00:19:24.560 "qid": 0, 00:19:24.560 "state": "enabled", 00:19:24.560 "thread": "nvmf_tgt_poll_group_000", 00:19:24.560 "listen_address": { 00:19:24.560 "trtype": "TCP", 00:19:24.560 "adrfam": "IPv4", 00:19:24.560 "traddr": "10.0.0.2", 00:19:24.560 "trsvcid": "4420" 00:19:24.560 }, 00:19:24.560 "peer_address": { 00:19:24.560 "trtype": "TCP", 00:19:24.560 "adrfam": "IPv4", 00:19:24.560 "traddr": "10.0.0.1", 00:19:24.560 "trsvcid": "60260" 00:19:24.560 }, 00:19:24.560 "auth": { 00:19:24.560 "state": "completed", 00:19:24.560 "digest": "sha256", 00:19:24.560 "dhgroup": "ffdhe8192" 00:19:24.560 } 00:19:24.560 } 00:19:24.560 ]' 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.560 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.819 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:19:25.755 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.755 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.755 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.755 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.755 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.755 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.755 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.755 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.015 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:26.015 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.015 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.015 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:26.015 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.015 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.015 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.015 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.015 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.015 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.015 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.015 01:55:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.980 00:19:26.980 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.980 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.980 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.238 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.238 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.238 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.238 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.238 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.238 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.238 { 00:19:27.238 "cntlid": 45, 00:19:27.238 "qid": 0, 00:19:27.238 "state": "enabled", 00:19:27.238 "thread": "nvmf_tgt_poll_group_000", 00:19:27.238 "listen_address": { 00:19:27.238 "trtype": "TCP", 00:19:27.238 "adrfam": "IPv4", 00:19:27.238 "traddr": "10.0.0.2", 00:19:27.238 "trsvcid": "4420" 00:19:27.238 }, 00:19:27.238 "peer_address": { 00:19:27.238 "trtype": "TCP", 00:19:27.238 "adrfam": "IPv4", 00:19:27.238 "traddr": "10.0.0.1", 
00:19:27.238 "trsvcid": "60288" 00:19:27.238 }, 00:19:27.238 "auth": { 00:19:27.238 "state": "completed", 00:19:27.238 "digest": "sha256", 00:19:27.238 "dhgroup": "ffdhe8192" 00:19:27.238 } 00:19:27.238 } 00:19:27.238 ]' 00:19:27.238 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.238 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.238 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.238 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.238 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.496 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.496 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.496 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.754 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:19:28.690 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.690 01:55:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.690 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.690 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.690 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.690 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.690 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.690 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.947 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.882 00:19:29.882 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.882 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.882 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.139 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.139 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.139 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.139 01:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.139 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.139 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.139 { 00:19:30.139 "cntlid": 47, 00:19:30.139 "qid": 0, 00:19:30.139 "state": "enabled", 00:19:30.139 "thread": "nvmf_tgt_poll_group_000", 00:19:30.139 "listen_address": { 00:19:30.139 "trtype": "TCP", 00:19:30.139 "adrfam": "IPv4", 00:19:30.139 "traddr": "10.0.0.2", 00:19:30.139 "trsvcid": "4420" 00:19:30.139 }, 00:19:30.139 "peer_address": { 00:19:30.139 "trtype": "TCP", 00:19:30.139 "adrfam": "IPv4", 00:19:30.139 "traddr": "10.0.0.1", 00:19:30.139 "trsvcid": "59948" 00:19:30.139 }, 00:19:30.139 "auth": { 00:19:30.139 "state": "completed", 00:19:30.139 "digest": "sha256", 00:19:30.139 "dhgroup": "ffdhe8192" 00:19:30.139 } 00:19:30.139 } 00:19:30.139 ]' 00:19:30.139 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.139 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.139 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.139 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.139 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.398 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.398 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.398 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.657 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:19:31.593 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.593 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.593 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.593 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.593 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.593 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:31.593 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.593 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.593 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:31.593 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.850 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.106 00:19:32.106 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.106 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.106 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.364 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.364 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.364 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.364 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.364 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.364 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.364 { 00:19:32.364 "cntlid": 49, 00:19:32.364 "qid": 0, 00:19:32.364 "state": "enabled", 00:19:32.364 "thread": "nvmf_tgt_poll_group_000", 00:19:32.364 "listen_address": { 00:19:32.364 "trtype": "TCP", 00:19:32.364 "adrfam": "IPv4", 00:19:32.364 "traddr": "10.0.0.2", 00:19:32.364 "trsvcid": "4420" 00:19:32.364 }, 00:19:32.364 "peer_address": { 00:19:32.364 "trtype": "TCP", 00:19:32.364 "adrfam": "IPv4", 00:19:32.364 "traddr": "10.0.0.1", 00:19:32.364 "trsvcid": "59984" 00:19:32.364 }, 00:19:32.364 "auth": { 00:19:32.364 "state": "completed", 00:19:32.364 "digest": "sha384", 00:19:32.364 "dhgroup": "null" 00:19:32.364 } 00:19:32.364 } 00:19:32.364 ]' 00:19:32.364 
01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.622 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.622 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.622 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:32.622 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.622 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.622 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.622 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.879 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:19:33.817 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.817 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.817 
01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.817 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.817 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.817 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.817 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.817 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:34.075 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:34.075 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.075 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.075 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:34.075 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:34.075 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.075 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.075 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.075 01:55:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.075 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.075 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.075 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.643 00:19:34.643 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.643 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.643 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.901 { 00:19:34.901 "cntlid": 51, 00:19:34.901 "qid": 0, 00:19:34.901 "state": "enabled", 00:19:34.901 "thread": "nvmf_tgt_poll_group_000", 00:19:34.901 "listen_address": { 00:19:34.901 "trtype": "TCP", 00:19:34.901 "adrfam": "IPv4", 00:19:34.901 "traddr": "10.0.0.2", 00:19:34.901 "trsvcid": "4420" 00:19:34.901 }, 00:19:34.901 "peer_address": { 00:19:34.901 "trtype": "TCP", 00:19:34.901 "adrfam": "IPv4", 00:19:34.901 "traddr": "10.0.0.1", 00:19:34.901 "trsvcid": "60016" 00:19:34.901 }, 00:19:34.901 "auth": { 00:19:34.901 "state": "completed", 00:19:34.901 "digest": "sha384", 00:19:34.901 "dhgroup": "null" 00:19:34.901 } 00:19:34.901 } 00:19:34.901 ]' 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.901 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.159 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:19:36.097 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.097 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.097 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.097 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.097 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.097 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.097 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:36.097 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:36.355 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:36.355 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.355 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.355 01:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:36.355 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:36.355 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.355 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.355 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.355 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.355 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.355 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.355 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.920 00:19:36.920 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.920 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:19:36.920 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.178 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.178 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.178 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.178 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.178 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.178 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.178 { 00:19:37.178 "cntlid": 53, 00:19:37.178 "qid": 0, 00:19:37.178 "state": "enabled", 00:19:37.178 "thread": "nvmf_tgt_poll_group_000", 00:19:37.178 "listen_address": { 00:19:37.178 "trtype": "TCP", 00:19:37.178 "adrfam": "IPv4", 00:19:37.178 "traddr": "10.0.0.2", 00:19:37.178 "trsvcid": "4420" 00:19:37.178 }, 00:19:37.178 "peer_address": { 00:19:37.178 "trtype": "TCP", 00:19:37.178 "adrfam": "IPv4", 00:19:37.178 "traddr": "10.0.0.1", 00:19:37.178 "trsvcid": "60050" 00:19:37.178 }, 00:19:37.178 "auth": { 00:19:37.178 "state": "completed", 00:19:37.178 "digest": "sha384", 00:19:37.178 "dhgroup": "null" 00:19:37.178 } 00:19:37.178 } 00:19:37.178 ]' 00:19:37.178 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.178 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.178 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.178 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:37.178 01:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.178 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.178 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.178 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.436 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:19:38.375 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.375 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.375 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.375 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.375 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.375 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.375 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:38.375 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.633 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.891 00:19:38.891 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.891 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.891 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.149 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.149 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.149 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.149 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.149 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.149 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.149 { 00:19:39.149 "cntlid": 55, 00:19:39.149 "qid": 0, 00:19:39.149 "state": "enabled", 00:19:39.149 "thread": "nvmf_tgt_poll_group_000", 00:19:39.149 "listen_address": { 00:19:39.149 "trtype": "TCP", 00:19:39.149 "adrfam": "IPv4", 00:19:39.149 "traddr": "10.0.0.2", 00:19:39.149 "trsvcid": "4420" 00:19:39.149 }, 00:19:39.149 "peer_address": { 00:19:39.149 "trtype": "TCP", 00:19:39.149 "adrfam": "IPv4", 00:19:39.149 "traddr": "10.0.0.1", 00:19:39.149 "trsvcid": "57104" 00:19:39.149 }, 00:19:39.149 "auth": { 
00:19:39.149 "state": "completed", 00:19:39.149 "digest": "sha384", 00:19:39.149 "dhgroup": "null" 00:19:39.149 } 00:19:39.149 } 00:19:39.149 ]' 00:19:39.149 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.408 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.408 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.408 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:39.408 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.408 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.408 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.408 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.666 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:19:40.602 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.602 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.602 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.602 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.602 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.602 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.602 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.602 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.602 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.860 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:40.860 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.861 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.861 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:40.861 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:40.861 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.861 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.861 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.861 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.861 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.861 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.861 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.429 00:19:41.429 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.429 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.429 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.429 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.429 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.429 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:41.429 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.429 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.429 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.429 { 00:19:41.429 "cntlid": 57, 00:19:41.429 "qid": 0, 00:19:41.429 "state": "enabled", 00:19:41.429 "thread": "nvmf_tgt_poll_group_000", 00:19:41.429 "listen_address": { 00:19:41.429 "trtype": "TCP", 00:19:41.429 "adrfam": "IPv4", 00:19:41.429 "traddr": "10.0.0.2", 00:19:41.429 "trsvcid": "4420" 00:19:41.429 }, 00:19:41.429 "peer_address": { 00:19:41.429 "trtype": "TCP", 00:19:41.429 "adrfam": "IPv4", 00:19:41.429 "traddr": "10.0.0.1", 00:19:41.429 "trsvcid": "57122" 00:19:41.429 }, 00:19:41.429 "auth": { 00:19:41.429 "state": "completed", 00:19:41.429 "digest": "sha384", 00:19:41.429 "dhgroup": "ffdhe2048" 00:19:41.429 } 00:19:41.429 } 00:19:41.429 ]' 00:19:41.429 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.687 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.687 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.687 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:41.687 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.687 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.687 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.687 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.943 01:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:19:42.917 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.917 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.917 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.917 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.917 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.917 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.917 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.917 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.178 01:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:43.178 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.178 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.178 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:43.178 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:43.178 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.178 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.178 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.178 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.178 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.178 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.178 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:43.435 00:19:43.435 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.435 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.435 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.693 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.693 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.693 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.693 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.693 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.693 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.693 { 00:19:43.693 "cntlid": 59, 00:19:43.693 "qid": 0, 00:19:43.693 "state": "enabled", 00:19:43.693 "thread": "nvmf_tgt_poll_group_000", 00:19:43.693 "listen_address": { 00:19:43.693 "trtype": "TCP", 00:19:43.693 "adrfam": "IPv4", 00:19:43.693 "traddr": "10.0.0.2", 00:19:43.693 "trsvcid": "4420" 00:19:43.693 }, 00:19:43.693 "peer_address": { 00:19:43.693 "trtype": "TCP", 00:19:43.693 "adrfam": "IPv4", 00:19:43.693 "traddr": "10.0.0.1", 00:19:43.693 "trsvcid": "57152" 00:19:43.693 }, 00:19:43.693 "auth": { 00:19:43.693 "state": "completed", 00:19:43.693 "digest": "sha384", 00:19:43.693 "dhgroup": "ffdhe2048" 00:19:43.693 } 00:19:43.693 } 00:19:43.693 ]' 00:19:43.693 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.693 
01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.693 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.693 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.694 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.952 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.952 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.952 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.212 01:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:19:45.149 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.149 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.149 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.149 01:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.149 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.149 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.149 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:45.149 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:45.407 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:45.407 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.407 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:45.407 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:45.407 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:45.407 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.407 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.407 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.407 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.408 01:55:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.408 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.408 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.666 00:19:45.666 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.666 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.666 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.924 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.924 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.924 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.924 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.924 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.924 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.924 { 
00:19:45.924 "cntlid": 61, 00:19:45.924 "qid": 0, 00:19:45.924 "state": "enabled", 00:19:45.924 "thread": "nvmf_tgt_poll_group_000", 00:19:45.924 "listen_address": { 00:19:45.924 "trtype": "TCP", 00:19:45.924 "adrfam": "IPv4", 00:19:45.924 "traddr": "10.0.0.2", 00:19:45.924 "trsvcid": "4420" 00:19:45.924 }, 00:19:45.924 "peer_address": { 00:19:45.924 "trtype": "TCP", 00:19:45.924 "adrfam": "IPv4", 00:19:45.924 "traddr": "10.0.0.1", 00:19:45.924 "trsvcid": "57162" 00:19:45.924 }, 00:19:45.924 "auth": { 00:19:45.924 "state": "completed", 00:19:45.924 "digest": "sha384", 00:19:45.924 "dhgroup": "ffdhe2048" 00:19:45.924 } 00:19:45.924 } 00:19:45.924 ]' 00:19:45.924 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.924 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.924 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.182 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.182 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.183 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.183 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.183 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.442 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:19:47.383 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.383 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.383 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.383 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.383 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.383 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.383 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:47.383 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.640 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.899 00:19:47.899 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.899 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.899 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.157 01:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.157 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.157 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.157 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.157 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.157 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.157 { 00:19:48.157 "cntlid": 63, 00:19:48.157 "qid": 0, 00:19:48.157 "state": "enabled", 00:19:48.157 "thread": "nvmf_tgt_poll_group_000", 00:19:48.157 "listen_address": { 00:19:48.157 "trtype": "TCP", 00:19:48.157 "adrfam": "IPv4", 00:19:48.157 "traddr": "10.0.0.2", 00:19:48.157 "trsvcid": "4420" 00:19:48.157 }, 00:19:48.157 "peer_address": { 00:19:48.157 "trtype": "TCP", 00:19:48.157 "adrfam": "IPv4", 00:19:48.157 "traddr": "10.0.0.1", 00:19:48.157 "trsvcid": "35828" 00:19:48.157 }, 00:19:48.157 "auth": { 00:19:48.157 "state": "completed", 00:19:48.157 "digest": "sha384", 00:19:48.157 "dhgroup": "ffdhe2048" 00:19:48.157 } 00:19:48.157 } 00:19:48.157 ]' 00:19:48.157 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.157 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.157 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.414 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.414 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.414 01:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.414 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.414 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.672 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:19:49.607 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.607 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.607 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.607 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.607 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.607 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.607 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.607 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.607 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.892 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:49.892 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.892 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:49.892 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:49.892 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.892 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.892 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.892 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.892 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.892 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.892 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.892 01:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.149 00:19:50.149 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.149 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.149 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.407 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.407 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.407 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.407 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.407 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.407 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.407 { 00:19:50.407 "cntlid": 65, 00:19:50.407 "qid": 0, 00:19:50.407 "state": "enabled", 00:19:50.407 "thread": "nvmf_tgt_poll_group_000", 00:19:50.407 "listen_address": { 00:19:50.407 "trtype": "TCP", 00:19:50.407 "adrfam": "IPv4", 00:19:50.407 "traddr": "10.0.0.2", 00:19:50.407 "trsvcid": "4420" 00:19:50.407 }, 00:19:50.407 "peer_address": { 00:19:50.407 "trtype": "TCP", 00:19:50.407 "adrfam": "IPv4", 00:19:50.407 "traddr": "10.0.0.1", 
00:19:50.407 "trsvcid": "35870" 00:19:50.407 }, 00:19:50.407 "auth": { 00:19:50.407 "state": "completed", 00:19:50.407 "digest": "sha384", 00:19:50.407 "dhgroup": "ffdhe3072" 00:19:50.407 } 00:19:50.407 } 00:19:50.407 ]' 00:19:50.407 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.407 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.407 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.407 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.407 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.667 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.667 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.667 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.927 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:19:51.862 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:51.862 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.862 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.862 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.862 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.862 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.862 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:51.862 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.120 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.377 00:19:52.377 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.377 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.377 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.635 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.635 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.635 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:52.635 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.635 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.635 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.635 { 00:19:52.635 "cntlid": 67, 00:19:52.635 "qid": 0, 00:19:52.635 "state": "enabled", 00:19:52.635 "thread": "nvmf_tgt_poll_group_000", 00:19:52.635 "listen_address": { 00:19:52.635 "trtype": "TCP", 00:19:52.635 "adrfam": "IPv4", 00:19:52.635 "traddr": "10.0.0.2", 00:19:52.635 "trsvcid": "4420" 00:19:52.635 }, 00:19:52.635 "peer_address": { 00:19:52.635 "trtype": "TCP", 00:19:52.635 "adrfam": "IPv4", 00:19:52.635 "traddr": "10.0.0.1", 00:19:52.635 "trsvcid": "35890" 00:19:52.635 }, 00:19:52.635 "auth": { 00:19:52.635 "state": "completed", 00:19:52.635 "digest": "sha384", 00:19:52.635 "dhgroup": "ffdhe3072" 00:19:52.635 } 00:19:52.635 } 00:19:52.635 ]' 00:19:52.635 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.635 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.635 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.635 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:52.635 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.893 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.893 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.893 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.149 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:19:54.084 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.084 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.084 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.084 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.084 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.084 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.084 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.084 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.341 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.598 00:19:54.598 01:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.598 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.598 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.855 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.855 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.855 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.855 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.855 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.855 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.855 { 00:19:54.855 "cntlid": 69, 00:19:54.855 "qid": 0, 00:19:54.855 "state": "enabled", 00:19:54.855 "thread": "nvmf_tgt_poll_group_000", 00:19:54.855 "listen_address": { 00:19:54.855 "trtype": "TCP", 00:19:54.855 "adrfam": "IPv4", 00:19:54.855 "traddr": "10.0.0.2", 00:19:54.855 "trsvcid": "4420" 00:19:54.855 }, 00:19:54.855 "peer_address": { 00:19:54.855 "trtype": "TCP", 00:19:54.855 "adrfam": "IPv4", 00:19:54.856 "traddr": "10.0.0.1", 00:19:54.856 "trsvcid": "35930" 00:19:54.856 }, 00:19:54.856 "auth": { 00:19:54.856 "state": "completed", 00:19:54.856 "digest": "sha384", 00:19:54.856 "dhgroup": "ffdhe3072" 00:19:54.856 } 00:19:54.856 } 00:19:54.856 ]' 00:19:54.856 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.113 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.113 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.113 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:55.113 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.113 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.113 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.113 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.371 01:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:19:56.308 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.308 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.308 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.308 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.308 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.308 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.308 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.308 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.566 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.824 00:19:56.824 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.824 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.824 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.081 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.081 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.081 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.081 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.081 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.081 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.081 { 00:19:57.081 "cntlid": 71, 00:19:57.081 "qid": 0, 00:19:57.081 "state": "enabled", 00:19:57.081 "thread": "nvmf_tgt_poll_group_000", 
00:19:57.081 "listen_address": { 00:19:57.081 "trtype": "TCP", 00:19:57.081 "adrfam": "IPv4", 00:19:57.081 "traddr": "10.0.0.2", 00:19:57.081 "trsvcid": "4420" 00:19:57.081 }, 00:19:57.081 "peer_address": { 00:19:57.081 "trtype": "TCP", 00:19:57.081 "adrfam": "IPv4", 00:19:57.081 "traddr": "10.0.0.1", 00:19:57.081 "trsvcid": "35970" 00:19:57.081 }, 00:19:57.081 "auth": { 00:19:57.081 "state": "completed", 00:19:57.081 "digest": "sha384", 00:19:57.081 "dhgroup": "ffdhe3072" 00:19:57.081 } 00:19:57.081 } 00:19:57.081 ]' 00:19:57.081 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.339 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.339 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.339 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.339 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.339 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.339 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.339 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.597 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 
00:19:58.532 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.532 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.532 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.532 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.532 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.532 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.532 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.532 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:58.532 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.792 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.389 00:19:59.389 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.389 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.389 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.647 01:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.647 { 00:19:59.647 "cntlid": 73, 00:19:59.647 "qid": 0, 00:19:59.647 "state": "enabled", 00:19:59.647 "thread": "nvmf_tgt_poll_group_000", 00:19:59.647 "listen_address": { 00:19:59.647 "trtype": "TCP", 00:19:59.647 "adrfam": "IPv4", 00:19:59.647 "traddr": "10.0.0.2", 00:19:59.647 "trsvcid": "4420" 00:19:59.647 }, 00:19:59.647 "peer_address": { 00:19:59.647 "trtype": "TCP", 00:19:59.647 "adrfam": "IPv4", 00:19:59.647 "traddr": "10.0.0.1", 00:19:59.647 "trsvcid": "37078" 00:19:59.647 }, 00:19:59.647 "auth": { 00:19:59.647 "state": "completed", 00:19:59.647 "digest": "sha384", 00:19:59.647 "dhgroup": "ffdhe4096" 00:19:59.647 } 00:19:59.647 } 00:19:59.647 ]' 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.647 01:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.647 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.905 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:20:00.837 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.837 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.837 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.837 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.837 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.837 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.837 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:20:00.837 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.094 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:01.094 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.094 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.094 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:01.094 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.094 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.094 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.094 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.094 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.094 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.094 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.095 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.662 00:20:01.662 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.662 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.662 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.919 { 00:20:01.919 "cntlid": 75, 00:20:01.919 "qid": 0, 00:20:01.919 "state": "enabled", 00:20:01.919 "thread": "nvmf_tgt_poll_group_000", 00:20:01.919 "listen_address": { 00:20:01.919 "trtype": "TCP", 00:20:01.919 "adrfam": "IPv4", 00:20:01.919 "traddr": "10.0.0.2", 00:20:01.919 "trsvcid": "4420" 00:20:01.919 }, 00:20:01.919 "peer_address": { 00:20:01.919 "trtype": "TCP", 00:20:01.919 "adrfam": "IPv4", 00:20:01.919 "traddr": "10.0.0.1", 00:20:01.919 "trsvcid": "37114" 00:20:01.919 
}, 00:20:01.919 "auth": { 00:20:01.919 "state": "completed", 00:20:01.919 "digest": "sha384", 00:20:01.919 "dhgroup": "ffdhe4096" 00:20:01.919 } 00:20:01.919 } 00:20:01.919 ]' 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.919 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.177 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:20:03.114 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.114 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.114 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.114 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.114 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.114 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.114 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.114 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.372 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.938 00:20:03.938 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.938 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.938 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.938 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.938 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.938 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.938 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:03.938 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.938 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.938 { 00:20:03.938 "cntlid": 77, 00:20:03.938 "qid": 0, 00:20:03.938 "state": "enabled", 00:20:03.938 "thread": "nvmf_tgt_poll_group_000", 00:20:03.938 "listen_address": { 00:20:03.938 "trtype": "TCP", 00:20:03.938 "adrfam": "IPv4", 00:20:03.938 "traddr": "10.0.0.2", 00:20:03.938 "trsvcid": "4420" 00:20:03.938 }, 00:20:03.938 "peer_address": { 00:20:03.938 "trtype": "TCP", 00:20:03.938 "adrfam": "IPv4", 00:20:03.938 "traddr": "10.0.0.1", 00:20:03.938 "trsvcid": "37136" 00:20:03.938 }, 00:20:03.938 "auth": { 00:20:03.938 "state": "completed", 00:20:03.938 "digest": "sha384", 00:20:03.938 "dhgroup": "ffdhe4096" 00:20:03.938 } 00:20:03.938 } 00:20:03.938 ]' 00:20:03.938 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.196 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.196 01:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.196 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.196 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.196 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.196 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.196 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:04.454 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:20:05.390 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.390 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.390 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.390 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.390 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.390 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.390 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.390 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.648 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:05.648 01:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.648 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.648 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:05.648 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:05.648 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.648 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:05.648 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.648 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.648 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.648 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.648 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.908 00:20:06.166 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.166 01:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.166 01:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.166 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.166 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.166 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.166 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.424 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.424 { 00:20:06.424 "cntlid": 79, 00:20:06.424 "qid": 0, 00:20:06.424 "state": "enabled", 00:20:06.424 "thread": "nvmf_tgt_poll_group_000", 00:20:06.424 "listen_address": { 00:20:06.424 "trtype": "TCP", 00:20:06.424 "adrfam": "IPv4", 00:20:06.424 "traddr": "10.0.0.2", 00:20:06.424 "trsvcid": "4420" 00:20:06.424 }, 00:20:06.424 "peer_address": { 00:20:06.424 "trtype": "TCP", 00:20:06.424 "adrfam": "IPv4", 00:20:06.424 "traddr": "10.0.0.1", 00:20:06.424 "trsvcid": "37170" 00:20:06.424 }, 00:20:06.424 "auth": { 00:20:06.424 "state": "completed", 00:20:06.424 "digest": "sha384", 00:20:06.424 "dhgroup": "ffdhe4096" 00:20:06.425 } 00:20:06.425 } 00:20:06.425 ]' 00:20:06.425 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.425 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.425 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.425 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:06.425 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.425 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.425 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.425 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.683 01:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:20:07.618 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.618 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.618 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.618 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.618 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.618 01:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.618 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.618 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:07.618 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:07.877 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:07.877 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.877 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.877 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:07.877 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:07.877 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.877 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.877 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.877 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.877 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.877 01:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.877 01:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.445 00:20:08.445 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.445 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.445 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.703 { 00:20:08.703 "cntlid": 81, 00:20:08.703 "qid": 0, 00:20:08.703 "state": "enabled", 00:20:08.703 "thread": 
"nvmf_tgt_poll_group_000", 00:20:08.703 "listen_address": { 00:20:08.703 "trtype": "TCP", 00:20:08.703 "adrfam": "IPv4", 00:20:08.703 "traddr": "10.0.0.2", 00:20:08.703 "trsvcid": "4420" 00:20:08.703 }, 00:20:08.703 "peer_address": { 00:20:08.703 "trtype": "TCP", 00:20:08.703 "adrfam": "IPv4", 00:20:08.703 "traddr": "10.0.0.1", 00:20:08.703 "trsvcid": "48792" 00:20:08.703 }, 00:20:08.703 "auth": { 00:20:08.703 "state": "completed", 00:20:08.703 "digest": "sha384", 00:20:08.703 "dhgroup": "ffdhe6144" 00:20:08.703 } 00:20:08.703 } 00:20:08.703 ]' 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.703 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.963 01:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:20:10.342 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.342 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.342 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.342 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.342 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.342 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.342 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.342 01:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.342 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.906 00:20:10.906 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.906 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.906 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.163 { 00:20:11.163 "cntlid": 83, 00:20:11.163 "qid": 0, 00:20:11.163 "state": "enabled", 00:20:11.163 "thread": "nvmf_tgt_poll_group_000", 00:20:11.163 "listen_address": { 00:20:11.163 "trtype": "TCP", 00:20:11.163 "adrfam": "IPv4", 00:20:11.163 "traddr": "10.0.0.2", 00:20:11.163 "trsvcid": "4420" 00:20:11.163 }, 00:20:11.163 "peer_address": { 00:20:11.163 "trtype": "TCP", 00:20:11.163 "adrfam": "IPv4", 00:20:11.163 "traddr": "10.0.0.1", 00:20:11.163 "trsvcid": "48828" 00:20:11.163 }, 00:20:11.163 "auth": { 00:20:11.163 "state": "completed", 00:20:11.163 "digest": "sha384", 00:20:11.163 "dhgroup": "ffdhe6144" 00:20:11.163 } 00:20:11.163 } 00:20:11.163 ]' 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.163 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.420 01:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:20:12.351 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.351 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.351 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.351 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.351 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.351 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.351 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:12.351 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:12.609 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:12.609 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.609 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.609 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:12.609 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.609 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.609 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.609 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.609 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.609 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.609 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.609 01:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.172 00:20:13.172 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.172 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.172 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.429 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.429 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.429 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.429 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.429 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.429 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.429 { 00:20:13.429 "cntlid": 85, 00:20:13.429 "qid": 0, 00:20:13.429 "state": "enabled", 00:20:13.429 "thread": "nvmf_tgt_poll_group_000", 00:20:13.429 "listen_address": { 00:20:13.429 "trtype": "TCP", 00:20:13.429 "adrfam": "IPv4", 00:20:13.429 "traddr": "10.0.0.2", 00:20:13.429 "trsvcid": "4420" 00:20:13.429 }, 00:20:13.429 "peer_address": { 00:20:13.429 "trtype": "TCP", 00:20:13.429 "adrfam": "IPv4", 00:20:13.429 "traddr": "10.0.0.1", 
00:20:13.429 "trsvcid": "48860" 00:20:13.429 }, 00:20:13.429 "auth": { 00:20:13.429 "state": "completed", 00:20:13.429 "digest": "sha384", 00:20:13.429 "dhgroup": "ffdhe6144" 00:20:13.429 } 00:20:13.429 } 00:20:13.429 ]' 00:20:13.429 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.429 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.429 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.686 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:13.686 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.686 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.686 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.686 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.943 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:20:14.873 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.873 01:55:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.873 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.873 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.873 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.873 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.873 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.873 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.131 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.722 00:20:15.722 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.722 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.722 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.980 01:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.980 { 00:20:15.980 "cntlid": 87, 00:20:15.980 "qid": 0, 00:20:15.980 "state": "enabled", 00:20:15.980 "thread": "nvmf_tgt_poll_group_000", 00:20:15.980 "listen_address": { 00:20:15.980 "trtype": "TCP", 00:20:15.980 "adrfam": "IPv4", 00:20:15.980 "traddr": "10.0.0.2", 00:20:15.980 "trsvcid": "4420" 00:20:15.980 }, 00:20:15.980 "peer_address": { 00:20:15.980 "trtype": "TCP", 00:20:15.980 "adrfam": "IPv4", 00:20:15.980 "traddr": "10.0.0.1", 00:20:15.980 "trsvcid": "48898" 00:20:15.980 }, 00:20:15.980 "auth": { 00:20:15.980 "state": "completed", 00:20:15.980 "digest": "sha384", 00:20:15.980 "dhgroup": "ffdhe6144" 00:20:15.980 } 00:20:15.980 } 00:20:15.980 ]' 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.980 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.237 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.609 01:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.609 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:18.543 00:20:18.543 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.543 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.543 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.801 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.802 { 00:20:18.802 "cntlid": 89, 00:20:18.802 "qid": 0, 00:20:18.802 "state": "enabled", 00:20:18.802 "thread": "nvmf_tgt_poll_group_000", 00:20:18.802 "listen_address": { 00:20:18.802 "trtype": "TCP", 00:20:18.802 "adrfam": "IPv4", 00:20:18.802 "traddr": "10.0.0.2", 00:20:18.802 "trsvcid": "4420" 00:20:18.802 }, 00:20:18.802 "peer_address": { 00:20:18.802 "trtype": "TCP", 00:20:18.802 "adrfam": "IPv4", 00:20:18.802 "traddr": "10.0.0.1", 00:20:18.802 "trsvcid": "34868" 00:20:18.802 }, 00:20:18.802 "auth": { 00:20:18.802 "state": "completed", 00:20:18.802 "digest": "sha384", 00:20:18.802 "dhgroup": "ffdhe8192" 00:20:18.802 } 00:20:18.802 } 00:20:18.802 ]' 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.802 
01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.802 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.368 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:20:20.300 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.300 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.300 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:20.300 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.300 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.300 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.300 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.300 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.558 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.491 00:20:21.491 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.491 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.491 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.491 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.491 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.491 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.491 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.491 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.491 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:21.491 { 00:20:21.491 "cntlid": 91, 00:20:21.491 "qid": 0, 00:20:21.491 "state": "enabled", 00:20:21.491 "thread": "nvmf_tgt_poll_group_000", 00:20:21.491 "listen_address": { 00:20:21.491 "trtype": "TCP", 00:20:21.491 "adrfam": "IPv4", 00:20:21.491 "traddr": "10.0.0.2", 00:20:21.491 "trsvcid": "4420" 00:20:21.491 }, 00:20:21.491 "peer_address": { 00:20:21.491 "trtype": "TCP", 00:20:21.491 "adrfam": "IPv4", 00:20:21.491 "traddr": "10.0.0.1", 00:20:21.491 "trsvcid": "34890" 00:20:21.491 }, 00:20:21.491 "auth": { 00:20:21.492 "state": "completed", 00:20:21.492 "digest": "sha384", 00:20:21.492 "dhgroup": "ffdhe8192" 00:20:21.492 } 00:20:21.492 } 00:20:21.492 ]' 00:20:21.492 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.749 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.749 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.749 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.749 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.749 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.749 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.749 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.006 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:20:22.939 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.939 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.939 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.939 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.939 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.939 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.939 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:22.939 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.196 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.129 00:20:24.129 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.129 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.129 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.387 { 00:20:24.387 "cntlid": 93, 00:20:24.387 "qid": 0, 00:20:24.387 "state": "enabled", 00:20:24.387 "thread": "nvmf_tgt_poll_group_000", 00:20:24.387 "listen_address": { 00:20:24.387 "trtype": "TCP", 00:20:24.387 "adrfam": "IPv4", 00:20:24.387 "traddr": "10.0.0.2", 00:20:24.387 "trsvcid": "4420" 00:20:24.387 }, 00:20:24.387 "peer_address": { 00:20:24.387 "trtype": "TCP", 00:20:24.387 "adrfam": "IPv4", 00:20:24.387 "traddr": "10.0.0.1", 00:20:24.387 "trsvcid": "34914" 00:20:24.387 }, 00:20:24.387 "auth": { 00:20:24.387 "state": "completed", 00:20:24.387 "digest": "sha384", 00:20:24.387 "dhgroup": "ffdhe8192" 00:20:24.387 } 00:20:24.387 } 00:20:24.387 ]' 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.387 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.645 01:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:20:25.578 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.578 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.578 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.578 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.578 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.578 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.578 01:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:25.578 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:25.836 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:25.836 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.836 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.836 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:25.836 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:25.836 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.836 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:25.836 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.836 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.836 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.836 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:20:25.837 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.772 00:20:26.772 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.772 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.772 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.030 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.030 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.030 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.030 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.030 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.030 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.030 { 00:20:27.030 "cntlid": 95, 00:20:27.030 "qid": 0, 00:20:27.030 "state": "enabled", 00:20:27.030 "thread": "nvmf_tgt_poll_group_000", 00:20:27.030 "listen_address": { 00:20:27.030 "trtype": "TCP", 00:20:27.030 "adrfam": "IPv4", 00:20:27.030 "traddr": "10.0.0.2", 00:20:27.030 "trsvcid": "4420" 00:20:27.030 }, 00:20:27.030 "peer_address": { 00:20:27.030 "trtype": "TCP", 00:20:27.030 "adrfam": "IPv4", 00:20:27.030 "traddr": "10.0.0.1", 
00:20:27.030 "trsvcid": "34938" 00:20:27.030 }, 00:20:27.030 "auth": { 00:20:27.030 "state": "completed", 00:20:27.030 "digest": "sha384", 00:20:27.030 "dhgroup": "ffdhe8192" 00:20:27.030 } 00:20:27.030 } 00:20:27.030 ]' 00:20:27.030 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.030 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.030 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.030 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.030 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.290 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.290 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.290 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.548 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:20:28.481 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.481 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.481 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.481 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.481 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.481 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:28.481 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.481 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.481 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:28.481 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:28.739 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:28.739 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.739 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:28.739 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:28.739 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:28.739 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.739 01:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.739 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.739 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.739 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.739 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.739 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.997 00:20:28.997 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.997 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.997 01:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.255 { 00:20:29.255 "cntlid": 97, 00:20:29.255 "qid": 0, 00:20:29.255 "state": "enabled", 00:20:29.255 "thread": "nvmf_tgt_poll_group_000", 00:20:29.255 "listen_address": { 00:20:29.255 "trtype": "TCP", 00:20:29.255 "adrfam": "IPv4", 00:20:29.255 "traddr": "10.0.0.2", 00:20:29.255 "trsvcid": "4420" 00:20:29.255 }, 00:20:29.255 "peer_address": { 00:20:29.255 "trtype": "TCP", 00:20:29.255 "adrfam": "IPv4", 00:20:29.255 "traddr": "10.0.0.1", 00:20:29.255 "trsvcid": "52308" 00:20:29.255 }, 00:20:29.255 "auth": { 00:20:29.255 "state": "completed", 00:20:29.255 "digest": "sha512", 00:20:29.255 "dhgroup": "null" 00:20:29.255 } 00:20:29.255 } 00:20:29.255 ]' 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:29.255 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.513 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:20:30.447 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.447 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.447 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.447 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.447 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.447 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.447 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:30.447 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.705 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.964 00:20:31.224 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.224 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.224 01:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.482 { 00:20:31.482 "cntlid": 99, 00:20:31.482 "qid": 0, 00:20:31.482 "state": "enabled", 00:20:31.482 "thread": "nvmf_tgt_poll_group_000", 00:20:31.482 "listen_address": { 00:20:31.482 "trtype": "TCP", 00:20:31.482 "adrfam": "IPv4", 00:20:31.482 "traddr": "10.0.0.2", 00:20:31.482 "trsvcid": "4420" 00:20:31.482 }, 00:20:31.482 "peer_address": { 00:20:31.482 "trtype": "TCP", 00:20:31.482 "adrfam": "IPv4", 00:20:31.482 "traddr": "10.0.0.1", 00:20:31.482 "trsvcid": "52332" 00:20:31.482 }, 00:20:31.482 "auth": { 00:20:31.482 "state": "completed", 00:20:31.482 "digest": "sha512", 00:20:31.482 "dhgroup": "null" 00:20:31.482 } 00:20:31.482 } 00:20:31.482 ]' 00:20:31.482 
01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.482 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.740 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:20:32.706 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.706 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.706 01:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.706 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.706 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.706 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.706 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:32.706 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:32.964 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:32.964 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.964 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.964 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:32.964 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:32.964 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.964 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.964 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.964 01:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.964 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.964 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.964 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.222 00:20:33.222 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.222 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.222 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.479 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.479 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.479 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.479 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.479 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:33.479 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.479 { 00:20:33.479 "cntlid": 101, 00:20:33.479 "qid": 0, 00:20:33.479 "state": "enabled", 00:20:33.479 "thread": "nvmf_tgt_poll_group_000", 00:20:33.479 "listen_address": { 00:20:33.479 "trtype": "TCP", 00:20:33.479 "adrfam": "IPv4", 00:20:33.479 "traddr": "10.0.0.2", 00:20:33.479 "trsvcid": "4420" 00:20:33.479 }, 00:20:33.479 "peer_address": { 00:20:33.479 "trtype": "TCP", 00:20:33.479 "adrfam": "IPv4", 00:20:33.479 "traddr": "10.0.0.1", 00:20:33.479 "trsvcid": "52366" 00:20:33.479 }, 00:20:33.479 "auth": { 00:20:33.479 "state": "completed", 00:20:33.479 "digest": "sha512", 00:20:33.479 "dhgroup": "null" 00:20:33.479 } 00:20:33.479 } 00:20:33.479 ]' 00:20:33.479 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.737 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.737 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.737 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:33.737 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.737 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.737 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.737 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.994 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:20:34.927 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.927 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.927 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.927 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.927 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.927 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.927 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:34.927 01:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.185 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:35.185 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.185 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.185 01:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:35.185 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:35.185 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.185 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:35.185 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.185 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.185 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.185 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.185 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.442 00:20:35.442 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.442 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.442 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:35.698 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.698 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.698 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.698 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.698 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.698 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.698 { 00:20:35.698 "cntlid": 103, 00:20:35.698 "qid": 0, 00:20:35.698 "state": "enabled", 00:20:35.698 "thread": "nvmf_tgt_poll_group_000", 00:20:35.698 "listen_address": { 00:20:35.698 "trtype": "TCP", 00:20:35.698 "adrfam": "IPv4", 00:20:35.698 "traddr": "10.0.0.2", 00:20:35.698 "trsvcid": "4420" 00:20:35.698 }, 00:20:35.698 "peer_address": { 00:20:35.698 "trtype": "TCP", 00:20:35.698 "adrfam": "IPv4", 00:20:35.698 "traddr": "10.0.0.1", 00:20:35.698 "trsvcid": "52410" 00:20:35.698 }, 00:20:35.698 "auth": { 00:20:35.699 "state": "completed", 00:20:35.699 "digest": "sha512", 00:20:35.699 "dhgroup": "null" 00:20:35.699 } 00:20:35.699 } 00:20:35.699 ]' 00:20:35.699 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.699 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.699 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.955 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:35.955 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:20:35.955 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.955 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.955 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.212 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:20:37.145 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.145 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.145 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.145 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.145 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.145 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.145 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.145 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:37.145 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:37.402 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:37.402 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.402 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.402 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:37.402 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:37.402 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.402 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.402 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.402 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.402 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.402 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.402 01:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.660 00:20:37.660 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.660 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.660 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.919 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.919 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.919 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.919 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.919 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.919 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.919 { 00:20:37.919 "cntlid": 105, 00:20:37.919 "qid": 0, 00:20:37.919 "state": "enabled", 00:20:37.919 "thread": "nvmf_tgt_poll_group_000", 00:20:37.919 "listen_address": { 00:20:37.919 "trtype": "TCP", 00:20:37.919 "adrfam": "IPv4", 00:20:37.919 "traddr": "10.0.0.2", 00:20:37.919 "trsvcid": "4420" 00:20:37.919 }, 00:20:37.919 "peer_address": { 00:20:37.919 "trtype": "TCP", 00:20:37.919 "adrfam": "IPv4", 00:20:37.919 "traddr": "10.0.0.1", 
00:20:37.919 "trsvcid": "33542" 00:20:37.919 }, 00:20:37.919 "auth": { 00:20:37.919 "state": "completed", 00:20:37.919 "digest": "sha512", 00:20:37.919 "dhgroup": "ffdhe2048" 00:20:37.919 } 00:20:37.919 } 00:20:37.919 ]' 00:20:37.919 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.919 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.919 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.177 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:38.177 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.177 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.177 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.177 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.435 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:20:39.369 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:39.369 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.369 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.369 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.369 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.369 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.369 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.369 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.626 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.884 00:20:39.884 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.884 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.884 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.142 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.142 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.142 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:40.142 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.142 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.143 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.143 { 00:20:40.143 "cntlid": 107, 00:20:40.143 "qid": 0, 00:20:40.143 "state": "enabled", 00:20:40.143 "thread": "nvmf_tgt_poll_group_000", 00:20:40.143 "listen_address": { 00:20:40.143 "trtype": "TCP", 00:20:40.143 "adrfam": "IPv4", 00:20:40.143 "traddr": "10.0.0.2", 00:20:40.143 "trsvcid": "4420" 00:20:40.143 }, 00:20:40.143 "peer_address": { 00:20:40.143 "trtype": "TCP", 00:20:40.143 "adrfam": "IPv4", 00:20:40.143 "traddr": "10.0.0.1", 00:20:40.143 "trsvcid": "33572" 00:20:40.143 }, 00:20:40.143 "auth": { 00:20:40.143 "state": "completed", 00:20:40.143 "digest": "sha512", 00:20:40.143 "dhgroup": "ffdhe2048" 00:20:40.143 } 00:20:40.143 } 00:20:40.143 ]' 00:20:40.143 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.143 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.143 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.143 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.143 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.143 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.143 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.143 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.402 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:20:41.339 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.596 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.596 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.596 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.596 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.596 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.596 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:41.596 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.853 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.110 00:20:42.110 01:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.110 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.111 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.368 { 00:20:42.368 "cntlid": 109, 00:20:42.368 "qid": 0, 00:20:42.368 "state": "enabled", 00:20:42.368 "thread": "nvmf_tgt_poll_group_000", 00:20:42.368 "listen_address": { 00:20:42.368 "trtype": "TCP", 00:20:42.368 "adrfam": "IPv4", 00:20:42.368 "traddr": "10.0.0.2", 00:20:42.368 "trsvcid": "4420" 00:20:42.368 }, 00:20:42.368 "peer_address": { 00:20:42.368 "trtype": "TCP", 00:20:42.368 "adrfam": "IPv4", 00:20:42.368 "traddr": "10.0.0.1", 00:20:42.368 "trsvcid": "33600" 00:20:42.368 }, 00:20:42.368 "auth": { 00:20:42.368 "state": "completed", 00:20:42.368 "digest": "sha512", 00:20:42.368 "dhgroup": "ffdhe2048" 00:20:42.368 } 00:20:42.368 } 00:20:42.368 ]' 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.368 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.626 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:20:43.559 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.559 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.559 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.559 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.817 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.382 00:20:44.382 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.382 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.382 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.382 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.383 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.383 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.383 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.641 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.641 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.641 { 00:20:44.641 "cntlid": 111, 00:20:44.641 "qid": 0, 00:20:44.641 "state": "enabled", 00:20:44.641 "thread": "nvmf_tgt_poll_group_000", 
00:20:44.641 "listen_address": { 00:20:44.641 "trtype": "TCP", 00:20:44.641 "adrfam": "IPv4", 00:20:44.641 "traddr": "10.0.0.2", 00:20:44.641 "trsvcid": "4420" 00:20:44.641 }, 00:20:44.641 "peer_address": { 00:20:44.641 "trtype": "TCP", 00:20:44.641 "adrfam": "IPv4", 00:20:44.641 "traddr": "10.0.0.1", 00:20:44.641 "trsvcid": "33636" 00:20:44.641 }, 00:20:44.641 "auth": { 00:20:44.641 "state": "completed", 00:20:44.641 "digest": "sha512", 00:20:44.641 "dhgroup": "ffdhe2048" 00:20:44.641 } 00:20:44.641 } 00:20:44.641 ]' 00:20:44.641 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.641 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.641 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.641 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:44.641 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.641 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.641 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.641 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.899 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 
00:20:45.831 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.831 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.831 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.831 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.831 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.831 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.831 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.832 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.832 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.090 01:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.348 00:20:46.348 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.348 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.348 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.606 01:56:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.606 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.606 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.606 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.606 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.606 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.606 { 00:20:46.606 "cntlid": 113, 00:20:46.606 "qid": 0, 00:20:46.606 "state": "enabled", 00:20:46.606 "thread": "nvmf_tgt_poll_group_000", 00:20:46.606 "listen_address": { 00:20:46.606 "trtype": "TCP", 00:20:46.606 "adrfam": "IPv4", 00:20:46.606 "traddr": "10.0.0.2", 00:20:46.606 "trsvcid": "4420" 00:20:46.606 }, 00:20:46.606 "peer_address": { 00:20:46.606 "trtype": "TCP", 00:20:46.606 "adrfam": "IPv4", 00:20:46.606 "traddr": "10.0.0.1", 00:20:46.606 "trsvcid": "33680" 00:20:46.606 }, 00:20:46.606 "auth": { 00:20:46.606 "state": "completed", 00:20:46.606 "digest": "sha512", 00:20:46.606 "dhgroup": "ffdhe3072" 00:20:46.606 } 00:20:46.606 } 00:20:46.606 ]' 00:20:46.606 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.606 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.606 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.863 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:46.863 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.863 01:56:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.863 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.863 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.121 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:20:48.052 01:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.052 01:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.052 01:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.052 01:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.052 01:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.052 01:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.053 01:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:20:48.053 01:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.310 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.587 00:20:48.587 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.587 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.587 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.862 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.862 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.862 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.862 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.862 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.862 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.862 { 00:20:48.862 "cntlid": 115, 00:20:48.862 "qid": 0, 00:20:48.862 "state": "enabled", 00:20:48.862 "thread": "nvmf_tgt_poll_group_000", 00:20:48.862 "listen_address": { 00:20:48.862 "trtype": "TCP", 00:20:48.862 "adrfam": "IPv4", 00:20:48.862 "traddr": "10.0.0.2", 00:20:48.862 "trsvcid": "4420" 00:20:48.862 }, 00:20:48.862 "peer_address": { 00:20:48.862 "trtype": "TCP", 00:20:48.862 "adrfam": "IPv4", 00:20:48.862 "traddr": "10.0.0.1", 00:20:48.862 "trsvcid": "60866" 00:20:48.862 
}, 00:20:48.862 "auth": { 00:20:48.862 "state": "completed", 00:20:48.862 "digest": "sha512", 00:20:48.862 "dhgroup": "ffdhe3072" 00:20:48.862 } 00:20:48.862 } 00:20:48.862 ]' 00:20:48.862 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.862 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.862 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.862 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:48.862 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.120 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.120 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.120 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.378 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:20:50.311 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.311 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.311 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.311 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.311 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.311 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.311 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.311 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.568 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:50.568 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.569 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.569 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:50.569 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.569 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.569 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:20:50.569 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.569 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.569 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.569 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.569 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.826 00:20:50.826 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.826 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.826 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.084 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.084 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.084 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.084 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:51.084 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.084 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.084 { 00:20:51.084 "cntlid": 117, 00:20:51.084 "qid": 0, 00:20:51.084 "state": "enabled", 00:20:51.084 "thread": "nvmf_tgt_poll_group_000", 00:20:51.084 "listen_address": { 00:20:51.084 "trtype": "TCP", 00:20:51.084 "adrfam": "IPv4", 00:20:51.084 "traddr": "10.0.0.2", 00:20:51.084 "trsvcid": "4420" 00:20:51.084 }, 00:20:51.084 "peer_address": { 00:20:51.084 "trtype": "TCP", 00:20:51.084 "adrfam": "IPv4", 00:20:51.084 "traddr": "10.0.0.1", 00:20:51.084 "trsvcid": "60884" 00:20:51.084 }, 00:20:51.084 "auth": { 00:20:51.084 "state": "completed", 00:20:51.084 "digest": "sha512", 00:20:51.084 "dhgroup": "ffdhe3072" 00:20:51.084 } 00:20:51.084 } 00:20:51.084 ]' 00:20:51.084 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.084 01:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.084 01:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.084 01:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.084 01:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.084 01:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.084 01:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.084 01:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:51.342 01:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:20:52.273 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.273 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.273 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.273 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.273 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.273 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.273 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.273 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.531 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:52.531 01:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.531 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:52.531 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:52.531 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:52.531 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.531 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.531 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.531 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.788 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.788 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.788 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.044 00:20:53.044 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.044 01:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.044 01:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.302 { 00:20:53.302 "cntlid": 119, 00:20:53.302 "qid": 0, 00:20:53.302 "state": "enabled", 00:20:53.302 "thread": "nvmf_tgt_poll_group_000", 00:20:53.302 "listen_address": { 00:20:53.302 "trtype": "TCP", 00:20:53.302 "adrfam": "IPv4", 00:20:53.302 "traddr": "10.0.0.2", 00:20:53.302 "trsvcid": "4420" 00:20:53.302 }, 00:20:53.302 "peer_address": { 00:20:53.302 "trtype": "TCP", 00:20:53.302 "adrfam": "IPv4", 00:20:53.302 "traddr": "10.0.0.1", 00:20:53.302 "trsvcid": "60908" 00:20:53.302 }, 00:20:53.302 "auth": { 00:20:53.302 "state": "completed", 00:20:53.302 "digest": "sha512", 00:20:53.302 "dhgroup": "ffdhe3072" 00:20:53.302 } 00:20:53.302 } 00:20:53.302 ]' 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.302 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.865 01:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:20:54.798 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.798 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.798 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.798 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.798 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.798 01:56:36 
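The ffdhe3072 iterations above, and the ffdhe4096 iterations that follow, all repeat one cycle: constrain the host bdev layer to a single digest/DH-group pair, register the host NQN on the target with the key under test, attach a controller (which drives DH-HMAC-CHAP), verify, then detach and deregister. A dry-run sketch of that loop, with paths and NQNs copied from the log (the `run` echo wrapper is hypothetical, and the real run omits the controller key for key3, which this sketch simplifies away):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-key DH-CHAP cycle seen in the log above.
# RPC path, socket, and NQNs are copied from the log; 'run' only echoes,
# so this is illustrative and needs no live SPDK target.
set -euo pipefail

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Hypothetical dry-run wrapper; replace the echo with "$@" to execute.
run() { echo "+ $*"; }

for dhgroup in ffdhe3072 ffdhe4096; do
  for keyid in 0 1 2 3; do
    # 1. Restrict host-side auth to one digest / DH group combination.
    run "$RPC" -s "$SOCK" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    # 2. Register the host on the target with the key pair under test
    #    (the actual log drops --dhchap-ctrlr-key for key3).
    run rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # 3. Attach a controller, triggering DH-HMAC-CHAP authentication.
    run "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # 4. Tear down before the next iteration.
    run "$RPC" -s "$SOCK" bdev_nvme_detach_controller nvme0
    run rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
  done
done
```

The inner loop also re-authenticates once more via `nvme connect` with the in-band `DHHC-1:` secrets before removing the host, which the sketch omits for brevity.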
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.798 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.798 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:54.798 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.056 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:55.056 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.056 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.056 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:55.056 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.056 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.056 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.056 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.056 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.056 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.056 01:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.056 01:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.314 00:20:55.314 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.314 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.314 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.572 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.572 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.572 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.572 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.572 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.572 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.572 { 00:20:55.572 "cntlid": 121, 00:20:55.572 "qid": 0, 00:20:55.572 "state": "enabled", 00:20:55.572 "thread": 
"nvmf_tgt_poll_group_000", 00:20:55.572 "listen_address": { 00:20:55.572 "trtype": "TCP", 00:20:55.572 "adrfam": "IPv4", 00:20:55.572 "traddr": "10.0.0.2", 00:20:55.572 "trsvcid": "4420" 00:20:55.572 }, 00:20:55.572 "peer_address": { 00:20:55.572 "trtype": "TCP", 00:20:55.572 "adrfam": "IPv4", 00:20:55.572 "traddr": "10.0.0.1", 00:20:55.572 "trsvcid": "60922" 00:20:55.572 }, 00:20:55.572 "auth": { 00:20:55.572 "state": "completed", 00:20:55.572 "digest": "sha512", 00:20:55.572 "dhgroup": "ffdhe4096" 00:20:55.572 } 00:20:55.572 } 00:20:55.572 ]' 00:20:55.572 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.572 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.572 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.830 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.830 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.830 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.830 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.830 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.088 01:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:20:57.022 01:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.022 01:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.022 01:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.022 01:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.022 01:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.022 01:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.022 01:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.022 01:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe4096 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.280 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.538 00:20:57.795 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.795 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.795 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.053 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.053 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.053 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.053 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.054 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.054 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.054 { 00:20:58.054 "cntlid": 123, 00:20:58.054 "qid": 0, 00:20:58.054 "state": "enabled", 00:20:58.054 "thread": "nvmf_tgt_poll_group_000", 00:20:58.054 "listen_address": { 00:20:58.054 "trtype": "TCP", 00:20:58.054 "adrfam": "IPv4", 00:20:58.054 "traddr": "10.0.0.2", 00:20:58.054 "trsvcid": "4420" 00:20:58.054 }, 00:20:58.054 "peer_address": { 00:20:58.054 "trtype": "TCP", 00:20:58.054 "adrfam": "IPv4", 00:20:58.054 "traddr": "10.0.0.1", 00:20:58.054 "trsvcid": "43152" 00:20:58.054 }, 00:20:58.054 "auth": { 00:20:58.054 "state": "completed", 00:20:58.054 "digest": "sha512", 00:20:58.054 "dhgroup": "ffdhe4096" 00:20:58.054 } 00:20:58.054 } 00:20:58.054 ]' 00:20:58.054 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.054 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.054 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.054 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:58.054 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.054 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.054 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.054 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.311 01:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:20:59.242 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.242 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.242 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.242 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.242 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.242 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.242 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.242 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.499 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:59.499 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.499 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.499 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:59.499 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:59.499 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.499 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.499 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.499 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.499 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.499 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.499 01:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.064 00:21:00.064 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.064 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.064 01:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.064 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.064 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.064 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.064 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.064 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.064 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.064 { 00:21:00.064 "cntlid": 125, 00:21:00.064 "qid": 0, 00:21:00.064 "state": "enabled", 00:21:00.064 "thread": "nvmf_tgt_poll_group_000", 00:21:00.064 "listen_address": { 00:21:00.064 "trtype": "TCP", 00:21:00.064 "adrfam": "IPv4", 00:21:00.064 "traddr": "10.0.0.2", 00:21:00.064 "trsvcid": "4420" 00:21:00.064 }, 00:21:00.064 "peer_address": { 00:21:00.064 "trtype": "TCP", 00:21:00.064 "adrfam": "IPv4", 00:21:00.064 "traddr": "10.0.0.1", 
00:21:00.064 "trsvcid": "43178" 00:21:00.064 }, 00:21:00.064 "auth": { 00:21:00.064 "state": "completed", 00:21:00.064 "digest": "sha512", 00:21:00.064 "dhgroup": "ffdhe4096" 00:21:00.064 } 00:21:00.064 } 00:21:00.064 ]' 00:21:00.064 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.321 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.321 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.321 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.321 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.321 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.321 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.321 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.579 01:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:21:01.510 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.510 01:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.510 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.510 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.510 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.510 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.510 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.510 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.767 01:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.331 00:21:02.332 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.332 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.332 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.589 01:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.589 { 00:21:02.589 "cntlid": 127, 00:21:02.589 "qid": 0, 00:21:02.589 "state": "enabled", 00:21:02.589 "thread": "nvmf_tgt_poll_group_000", 00:21:02.589 "listen_address": { 00:21:02.589 "trtype": "TCP", 00:21:02.589 "adrfam": "IPv4", 00:21:02.589 "traddr": "10.0.0.2", 00:21:02.589 "trsvcid": "4420" 00:21:02.589 }, 00:21:02.589 "peer_address": { 00:21:02.589 "trtype": "TCP", 00:21:02.589 "adrfam": "IPv4", 00:21:02.589 "traddr": "10.0.0.1", 00:21:02.589 "trsvcid": "43218" 00:21:02.589 }, 00:21:02.589 "auth": { 00:21:02.589 "state": "completed", 00:21:02.589 "digest": "sha512", 00:21:02.589 "dhgroup": "ffdhe4096" 00:21:02.589 } 00:21:02.589 } 00:21:02.589 ]' 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.589 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.846 01:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:21:03.779 01:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.779 01:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.779 01:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.779 01:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.779 01:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.779 01:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.779 01:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.779 01:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.779 01:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.345 01:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:04.345 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.345 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.345 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:04.345 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:04.345 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.345 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.345 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.345 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.345 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.345 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.345 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:04.943 00:21:04.943 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.943 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.943 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.943 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.943 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.943 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.943 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.943 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.943 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.943 { 00:21:04.943 "cntlid": 129, 00:21:04.943 "qid": 0, 00:21:04.943 "state": "enabled", 00:21:04.943 "thread": "nvmf_tgt_poll_group_000", 00:21:04.943 "listen_address": { 00:21:04.943 "trtype": "TCP", 00:21:04.943 "adrfam": "IPv4", 00:21:04.943 "traddr": "10.0.0.2", 00:21:04.943 "trsvcid": "4420" 00:21:04.943 }, 00:21:04.943 "peer_address": { 00:21:04.943 "trtype": "TCP", 00:21:04.943 "adrfam": "IPv4", 00:21:04.943 "traddr": "10.0.0.1", 00:21:04.943 "trsvcid": "43236" 00:21:04.943 }, 00:21:04.943 "auth": { 00:21:04.943 "state": "completed", 00:21:04.943 "digest": "sha512", 00:21:04.943 "dhgroup": "ffdhe6144" 00:21:04.943 } 00:21:04.943 } 00:21:04.943 ]' 00:21:04.943 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.204 
01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.204 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.204 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.204 01:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.204 01:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.204 01:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.204 01:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.463 01:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:21:06.402 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.402 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.402 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:06.402 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.402 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.402 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.402 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.402 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
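The `for keyid in "${!keys[@]}"` entries in the trace above mark the inner loop of the test matrix: `"${!keys[@]}"` expands to the *indices* of the `keys` array (0 through 3), while the outer `for dhgroup in "${dhgroups[@]}"` walks the array *values*. A minimal standalone sketch of that loop shape (the array contents here are placeholders, not the real DH-HMAC-CHAP secrets from `auth.sh`):

```shell
#!/usr/bin/env bash
# Placeholder test matrix mirroring the trace: dhgroups x key ids.
dhgroups=(ffdhe4096 ffdhe6144)
keys=([0]="k0" [1]="k1" [2]="k2" [3]="k3")

combos=0
for dhgroup in "${dhgroups[@]}"; do        # "${arr[@]}"  -> iterate VALUES
    for keyid in "${!keys[@]}"; do         # "${!arr[@]}" -> iterate INDICES: 0 1 2 3
        # Each pass in the real test reconfigures the host and re-authenticates,
        # roughly: rpc.py bdev_nvme_set_options --dhchap-dhgroups "$dhgroup" ...
        combos=$((combos + 1))
    done
done
echo "$combos combinations"   # 2 dhgroups x 4 key ids
```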
00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.660 01:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.226 00:21:07.226 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.226 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.226 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:21:07.484 { 00:21:07.484 "cntlid": 131, 00:21:07.484 "qid": 0, 00:21:07.484 "state": "enabled", 00:21:07.484 "thread": "nvmf_tgt_poll_group_000", 00:21:07.484 "listen_address": { 00:21:07.484 "trtype": "TCP", 00:21:07.484 "adrfam": "IPv4", 00:21:07.484 "traddr": "10.0.0.2", 00:21:07.484 "trsvcid": "4420" 00:21:07.484 }, 00:21:07.484 "peer_address": { 00:21:07.484 "trtype": "TCP", 00:21:07.484 "adrfam": "IPv4", 00:21:07.484 "traddr": "10.0.0.1", 00:21:07.484 "trsvcid": "43272" 00:21:07.484 }, 00:21:07.484 "auth": { 00:21:07.484 "state": "completed", 00:21:07.484 "digest": "sha512", 00:21:07.484 "dhgroup": "ffdhe6144" 00:21:07.484 } 00:21:07.484 } 00:21:07.484 ]' 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.484 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.744 01:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:21:08.679 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.679 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.679 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.679 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.679 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.679 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.679 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.679 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.937 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.502 00:21:09.502 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.502 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.502 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.760 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.760 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.760 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.760 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.760 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.760 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.760 { 00:21:09.760 "cntlid": 133, 00:21:09.760 "qid": 0, 00:21:09.760 "state": "enabled", 00:21:09.760 "thread": "nvmf_tgt_poll_group_000", 00:21:09.760 "listen_address": { 00:21:09.760 "trtype": "TCP", 00:21:09.760 "adrfam": "IPv4", 00:21:09.760 "traddr": "10.0.0.2", 00:21:09.760 "trsvcid": "4420" 00:21:09.760 }, 00:21:09.760 "peer_address": { 00:21:09.760 "trtype": "TCP", 00:21:09.760 "adrfam": "IPv4", 00:21:09.760 "traddr": "10.0.0.1", 00:21:09.760 "trsvcid": "36326" 00:21:09.760 }, 00:21:09.760 "auth": { 00:21:09.760 "state": "completed", 00:21:09.760 "digest": "sha512", 00:21:09.760 "dhgroup": "ffdhe6144" 00:21:09.760 } 00:21:09.760 } 00:21:09.760 ]' 00:21:09.760 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.018 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.018 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.018 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 
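The recurring checks such as `[[ sha512 == \s\h\a\5\1\2 ]]` and `[[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]` look odd only because of xtrace rendering: when the right-hand side of `[[ == ]]` is quoted in the script, bash's `set -x` output backslash-escapes every character to show it is matched literally rather than as a glob pattern. A small sketch of the two matching modes (variable names are illustrative):

```shell
#!/usr/bin/env bash
digest="sha512"
expected="sha512"

# Quoted RHS: compared as a literal string. Under `set -x` bash echoes
# this test as `[[ sha512 == \s\h\a\5\1\2 ]]`, escaping each character
# to signal that no glob matching takes place.
literal=no
[[ $digest == "$expected" ]] && literal=match

# Unquoted RHS: treated as a glob pattern instead.
glob=no
[[ $digest == sha* ]] && glob=match

echo "$literal $glob"
```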
00:21:10.018 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.018 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.018 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.018 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.276 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:21:11.209 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.209 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.209 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.209 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.209 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.209 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.210 01:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.210 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
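The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion in the trace uses bash's `:+` alternate-value form inside an array assignment, so the `--dhchap-ctrlr-key` flag pair is emitted only when a controller key exists for that index; that is why the key3 `nvmf_subsystem_add_host` call above carries no `--dhchap-ctrlr-key` argument. A standalone sketch of the idiom (function and array names here are illustrative, not taken from `auth.sh`):

```shell
#!/usr/bin/env bash
# Controller keys indexed by key id; index 3 deliberately left empty.
ckeys=([0]="c0secret" [1]="c1secret" [2]="c2secret" [3]="")

build_args() {
    local keyid=$1
    # ${var:+words} expands to "words" only if var is set AND non-empty,
    # so an empty ckey contributes zero array elements (no stray flag).
    local args=(--dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${args[@]}"
}

build_args 2   # both flag pairs present
build_args 3   # ctrl-key flag pair omitted
```

The same pattern works for any optional CLI flag whose presence should track a possibly-empty variable.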
00:21:11.467 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.031 00:21:12.031 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.031 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.031 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.287 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.288 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.288 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.288 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.288 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.288 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.288 { 00:21:12.288 "cntlid": 135, 00:21:12.288 "qid": 0, 00:21:12.288 "state": "enabled", 00:21:12.288 "thread": "nvmf_tgt_poll_group_000", 00:21:12.288 "listen_address": { 00:21:12.288 "trtype": "TCP", 00:21:12.288 "adrfam": "IPv4", 00:21:12.288 "traddr": "10.0.0.2", 00:21:12.288 "trsvcid": "4420" 00:21:12.288 }, 00:21:12.288 "peer_address": { 00:21:12.288 "trtype": "TCP", 00:21:12.288 "adrfam": "IPv4", 00:21:12.288 "traddr": "10.0.0.1", 
00:21:12.288 "trsvcid": "36366" 00:21:12.288 }, 00:21:12.288 "auth": { 00:21:12.288 "state": "completed", 00:21:12.288 "digest": "sha512", 00:21:12.288 "dhgroup": "ffdhe6144" 00:21:12.288 } 00:21:12.288 } 00:21:12.288 ]' 00:21:12.288 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.288 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.288 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.288 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:12.288 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.545 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.545 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.545 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.800 01:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:21:13.732 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.732 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.732 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.732 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.732 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.732 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.732 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.732 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.732 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.990 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.923 00:21:14.923 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.923 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.923 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.183 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.183 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.183 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.183 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.183 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.183 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.183 { 00:21:15.183 "cntlid": 137, 00:21:15.183 "qid": 0, 00:21:15.183 "state": "enabled", 00:21:15.183 "thread": "nvmf_tgt_poll_group_000", 00:21:15.183 "listen_address": { 00:21:15.183 "trtype": "TCP", 00:21:15.183 "adrfam": "IPv4", 00:21:15.183 "traddr": "10.0.0.2", 00:21:15.183 "trsvcid": "4420" 00:21:15.183 }, 00:21:15.183 "peer_address": { 00:21:15.183 "trtype": "TCP", 00:21:15.183 "adrfam": "IPv4", 00:21:15.183 "traddr": "10.0.0.1", 00:21:15.183 "trsvcid": "36406" 00:21:15.183 }, 00:21:15.183 "auth": { 00:21:15.183 "state": "completed", 00:21:15.183 "digest": "sha512", 00:21:15.183 "dhgroup": "ffdhe8192" 00:21:15.183 } 00:21:15.183 } 00:21:15.183 ]' 00:21:15.183 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.183 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.183 01:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.183 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.183 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.183 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.183 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.183 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.441 01:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:21:16.375 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.375 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.375 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.375 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.375 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.375 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.375 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.375 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.633 01:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:16.633 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.633 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.633 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:16.633 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:16.633 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.633 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.633 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.633 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.633 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.633 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.633 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:17.566 00:21:17.566 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.566 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.566 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.824 { 00:21:17.824 "cntlid": 139, 00:21:17.824 "qid": 0, 00:21:17.824 "state": "enabled", 00:21:17.824 "thread": "nvmf_tgt_poll_group_000", 00:21:17.824 "listen_address": { 00:21:17.824 "trtype": "TCP", 00:21:17.824 "adrfam": "IPv4", 00:21:17.824 "traddr": "10.0.0.2", 00:21:17.824 "trsvcid": "4420" 00:21:17.824 }, 00:21:17.824 "peer_address": { 00:21:17.824 "trtype": "TCP", 00:21:17.824 "adrfam": "IPv4", 00:21:17.824 "traddr": "10.0.0.1", 00:21:17.824 "trsvcid": "36438" 00:21:17.824 }, 00:21:17.824 "auth": { 00:21:17.824 "state": "completed", 00:21:17.824 "digest": "sha512", 00:21:17.824 "dhgroup": "ffdhe8192" 00:21:17.824 } 00:21:17.824 } 00:21:17.824 ]' 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.824 
01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.824 01:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.082 01:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MTk0NWU0YjJjMjAzMWRhN2M3ODE3ZGIyYzlkOTJlNTdogNNE: --dhchap-ctrl-secret DHHC-1:02:MWEwMWVhZGUxZDNlZjEzZmU4YWY5MjZhZTk1ZDM3MGI1NjM2MmIzNGE4NDk4ZGQzAf73qA==: 00:21:19.014 01:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.015 01:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.015 01:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.015 01:57:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.015 01:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.015 01:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.015 01:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.015 01:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.273 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:19.273 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.273 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.273 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:19.273 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:19.273 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.273 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.273 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.273 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.273 01:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.273 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.273 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.207 00:21:20.207 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.207 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.207 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.465 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.465 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.465 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.465 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.465 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.465 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.465 { 
00:21:20.465 "cntlid": 141, 00:21:20.465 "qid": 0, 00:21:20.465 "state": "enabled", 00:21:20.465 "thread": "nvmf_tgt_poll_group_000", 00:21:20.465 "listen_address": { 00:21:20.465 "trtype": "TCP", 00:21:20.465 "adrfam": "IPv4", 00:21:20.465 "traddr": "10.0.0.2", 00:21:20.465 "trsvcid": "4420" 00:21:20.465 }, 00:21:20.465 "peer_address": { 00:21:20.465 "trtype": "TCP", 00:21:20.465 "adrfam": "IPv4", 00:21:20.465 "traddr": "10.0.0.1", 00:21:20.465 "trsvcid": "38354" 00:21:20.465 }, 00:21:20.465 "auth": { 00:21:20.465 "state": "completed", 00:21:20.465 "digest": "sha512", 00:21:20.465 "dhgroup": "ffdhe8192" 00:21:20.465 } 00:21:20.465 } 00:21:20.465 ]' 00:21:20.465 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.465 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.465 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.465 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:20.465 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.723 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.723 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.723 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.980 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTQwYjdjMmY2ODQ1ZTEwZGQ2ZGUzMWUyMTlkN2Q2YmQ5YjMzNDM1MWZjYWVjMmM58v9UgQ==: --dhchap-ctrl-secret DHHC-1:01:ZjUxOGM2OTBjZWE4ZTVhNjNkNjcxMGFkYzM0MmIxM2aWwV8H: 00:21:21.930 01:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.931 01:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.931 01:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.931 01:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.931 01:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.931 01:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.931 01:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:21.931 01:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.188 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.121 00:21:23.121 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.121 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.121 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.379 01:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.379 { 00:21:23.379 "cntlid": 143, 00:21:23.379 "qid": 0, 00:21:23.379 "state": "enabled", 00:21:23.379 "thread": "nvmf_tgt_poll_group_000", 00:21:23.379 "listen_address": { 00:21:23.379 "trtype": "TCP", 00:21:23.379 "adrfam": "IPv4", 00:21:23.379 "traddr": "10.0.0.2", 00:21:23.379 "trsvcid": "4420" 00:21:23.379 }, 00:21:23.379 "peer_address": { 00:21:23.379 "trtype": "TCP", 00:21:23.379 "adrfam": "IPv4", 00:21:23.379 "traddr": "10.0.0.1", 00:21:23.379 "trsvcid": "38372" 00:21:23.379 }, 00:21:23.379 "auth": { 00:21:23.379 "state": "completed", 00:21:23.379 "digest": "sha512", 00:21:23.379 "dhgroup": "ffdhe8192" 00:21:23.379 } 00:21:23.379 } 00:21:23.379 ]' 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.379 01:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.379 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.637 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:21:24.569 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.569 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.569 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.569 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.569 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.569 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:24.569 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:24.569 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:24.569 01:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:24.569 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:24.569 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.828 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.762 00:21:25.762 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.762 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.762 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.019 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.019 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.019 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.019 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.019 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.020 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.020 { 00:21:26.020 "cntlid": 145, 00:21:26.020 "qid": 0, 00:21:26.020 "state": "enabled", 
00:21:26.020 "thread": "nvmf_tgt_poll_group_000", 00:21:26.020 "listen_address": { 00:21:26.020 "trtype": "TCP", 00:21:26.020 "adrfam": "IPv4", 00:21:26.020 "traddr": "10.0.0.2", 00:21:26.020 "trsvcid": "4420" 00:21:26.020 }, 00:21:26.020 "peer_address": { 00:21:26.020 "trtype": "TCP", 00:21:26.020 "adrfam": "IPv4", 00:21:26.020 "traddr": "10.0.0.1", 00:21:26.020 "trsvcid": "38390" 00:21:26.020 }, 00:21:26.020 "auth": { 00:21:26.020 "state": "completed", 00:21:26.020 "digest": "sha512", 00:21:26.020 "dhgroup": "ffdhe8192" 00:21:26.020 } 00:21:26.020 } 00:21:26.020 ]' 00:21:26.020 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.020 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.020 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.020 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.020 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.277 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.277 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.277 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.535 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YTgyOGNlOGNiMDUyZmQ0YTBkZDVmZGI2NjVhZmM4NjY1Nzk2ZjI4ZDU5NjA5ZDA1ys+zBQ==: --dhchap-ctrl-secret DHHC-1:03:ZDU0ZjFiNWUyMjhiZWZhNTdkNjYyMDFjZWNiMDJlMzBkZjkyOTE4ZGEyNWI0NTI0MGM0YWZhMDE4ZTdhZjYyMRNfQS4=: 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:27.470 
01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:27.470 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.404 request: 00:21:28.404 { 00:21:28.404 "name": "nvme0", 00:21:28.404 "trtype": "tcp", 00:21:28.404 "traddr": "10.0.0.2", 00:21:28.404 "adrfam": "ipv4", 00:21:28.404 "trsvcid": "4420", 00:21:28.404 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:28.404 "prchk_reftag": false, 00:21:28.404 "prchk_guard": false, 00:21:28.404 "hdgst": false, 00:21:28.404 "ddgst": false, 00:21:28.404 "dhchap_key": "key2", 
00:21:28.404 "method": "bdev_nvme_attach_controller", 00:21:28.404 "req_id": 1 00:21:28.404 } 00:21:28.404 Got JSON-RPC error response 00:21:28.404 response: 00:21:28.404 { 00:21:28.404 "code": -5, 00:21:28.404 "message": "Input/output error" 00:21:28.404 } 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.404 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.405 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.972 request: 00:21:28.972 { 00:21:28.972 "name": "nvme0", 00:21:28.972 
"trtype": "tcp", 00:21:28.972 "traddr": "10.0.0.2", 00:21:28.972 "adrfam": "ipv4", 00:21:28.972 "trsvcid": "4420", 00:21:28.972 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:28.972 "prchk_reftag": false, 00:21:28.972 "prchk_guard": false, 00:21:28.972 "hdgst": false, 00:21:28.972 "ddgst": false, 00:21:28.972 "dhchap_key": "key1", 00:21:28.972 "dhchap_ctrlr_key": "ckey2", 00:21:28.972 "method": "bdev_nvme_attach_controller", 00:21:28.972 "req_id": 1 00:21:28.972 } 00:21:28.972 Got JSON-RPC error response 00:21:28.972 response: 00:21:28.972 { 00:21:28.972 "code": -5, 00:21:28.972 "message": "Input/output error" 00:21:28.972 } 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.972 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.972 01:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.910 request: 00:21:29.910 { 00:21:29.910 "name": "nvme0", 00:21:29.910 "trtype": "tcp", 00:21:29.910 "traddr": "10.0.0.2", 00:21:29.910 "adrfam": "ipv4", 00:21:29.910 "trsvcid": "4420", 00:21:29.910 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:29.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.910 "prchk_reftag": false, 00:21:29.910 "prchk_guard": false, 00:21:29.910 "hdgst": false, 00:21:29.910 "ddgst": false, 00:21:29.910 "dhchap_key": "key1", 00:21:29.910 "dhchap_ctrlr_key": "ckey1", 00:21:29.910 "method": "bdev_nvme_attach_controller", 00:21:29.910 "req_id": 1 00:21:29.910 } 00:21:29.910 Got JSON-RPC error response 00:21:29.910 response: 00:21:29.910 { 00:21:29.910 "code": -5, 00:21:29.910 "message": "Input/output error" 00:21:29.910 } 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2272055 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2272055 ']' 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2272055 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2272055 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2272055' 00:21:29.910 killing process with pid 2272055 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2272055 00:21:29.910 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2272055 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2294568 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2294568 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2294568 ']' 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.169 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2294568 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2294568 ']' 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.427 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.685 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:30.685 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:30.685 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:30.685 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.685 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.943 
01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.943 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.879 00:21:31.879 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.879 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.879 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.137 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.137 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.137 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.137 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.137 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.137 01:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.137 { 00:21:32.137 "cntlid": 1, 00:21:32.137 "qid": 0, 00:21:32.137 "state": "enabled", 00:21:32.137 "thread": "nvmf_tgt_poll_group_000", 00:21:32.137 "listen_address": { 00:21:32.137 "trtype": "TCP", 00:21:32.137 "adrfam": "IPv4", 00:21:32.137 "traddr": "10.0.0.2", 00:21:32.137 "trsvcid": "4420" 00:21:32.137 }, 00:21:32.137 "peer_address": { 00:21:32.137 "trtype": "TCP", 00:21:32.137 "adrfam": "IPv4", 00:21:32.137 "traddr": "10.0.0.1", 00:21:32.137 "trsvcid": "49012" 00:21:32.137 }, 00:21:32.137 "auth": { 00:21:32.137 "state": "completed", 00:21:32.137 "digest": "sha512", 00:21:32.137 "dhgroup": "ffdhe8192" 00:21:32.137 } 00:21:32.137 } 00:21:32.137 ]' 00:21:32.137 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.137 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.137 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.137 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.137 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.137 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.137 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.137 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.395 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NmNhN2I0MWEzY2Q1MGZlY2I2OGQxZGM0NGZkYmQ2ZGZjNGE5MzI1NDdjYjg0YjI3NWMxNTg4MjJjNjg2ZWI2OIOaRQI=: 00:21:33.332 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.332 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.332 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.332 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.332 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.333 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:33.333 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.333 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.333 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.333 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:33.333 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:33.590 01:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.590 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:33.591 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.591 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:33.591 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:33.591 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:33.591 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:33.591 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.591 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.849 request: 00:21:33.849 { 00:21:33.849 "name": "nvme0", 00:21:33.849 "trtype": "tcp", 00:21:33.849 
"traddr": "10.0.0.2", 00:21:33.849 "adrfam": "ipv4", 00:21:33.849 "trsvcid": "4420", 00:21:33.849 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:33.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.849 "prchk_reftag": false, 00:21:33.849 "prchk_guard": false, 00:21:33.849 "hdgst": false, 00:21:33.849 "ddgst": false, 00:21:33.849 "dhchap_key": "key3", 00:21:33.849 "method": "bdev_nvme_attach_controller", 00:21:33.849 "req_id": 1 00:21:33.849 } 00:21:33.849 Got JSON-RPC error response 00:21:33.849 response: 00:21:33.849 { 00:21:33.849 "code": -5, 00:21:33.849 "message": "Input/output error" 00:21:33.849 } 00:21:33.849 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:33.849 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:33.849 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:33.849 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:33.849 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:33.849 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:33.849 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:33.849 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:34.107 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.107 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:34.107 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.107 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:34.107 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:34.107 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:34.107 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:34.107 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.107 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.365 request: 00:21:34.365 { 00:21:34.365 "name": "nvme0", 00:21:34.365 "trtype": "tcp", 00:21:34.365 "traddr": "10.0.0.2", 00:21:34.365 "adrfam": "ipv4", 00:21:34.365 "trsvcid": "4420", 00:21:34.365 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:34.365 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.365 "prchk_reftag": false, 00:21:34.365 "prchk_guard": false, 00:21:34.365 "hdgst": false, 00:21:34.365 "ddgst": false, 00:21:34.365 "dhchap_key": "key3", 00:21:34.365 "method": "bdev_nvme_attach_controller", 00:21:34.365 "req_id": 1 00:21:34.365 } 00:21:34.365 Got JSON-RPC error response 00:21:34.365 response: 00:21:34.365 { 00:21:34.365 "code": -5, 00:21:34.365 "message": "Input/output error" 00:21:34.365 } 00:21:34.365 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:34.365 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:34.365 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:34.365 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:34.365 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:34.365 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:34.365 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:34.365 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.365 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.365 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.623 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.881 request: 00:21:34.881 { 00:21:34.881 "name": "nvme0", 00:21:34.881 "trtype": "tcp", 00:21:34.881 "traddr": "10.0.0.2", 00:21:34.881 "adrfam": "ipv4", 00:21:34.881 "trsvcid": "4420", 00:21:34.881 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:34.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.881 "prchk_reftag": false, 00:21:34.881 "prchk_guard": false, 00:21:34.881 "hdgst": false, 00:21:34.881 "ddgst": false, 00:21:34.881 "dhchap_key": "key0", 00:21:34.881 "dhchap_ctrlr_key": "key1", 00:21:34.881 "method": "bdev_nvme_attach_controller", 00:21:34.881 "req_id": 1 00:21:34.881 } 00:21:34.881 Got JSON-RPC error response 00:21:34.881 response: 00:21:34.881 { 00:21:34.881 "code": -5, 00:21:34.881 "message": "Input/output error" 00:21:34.881 } 00:21:34.881 01:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:34.881 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:34.881 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:34.881 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:34.881 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:34.881 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:35.139 00:21:35.139 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:35.139 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:35.139 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.397 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.397 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.397 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2272083 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2272083 ']' 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2272083 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2272083 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2272083' 00:21:35.657 killing process with pid 2272083 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2272083 00:21:35.657 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2272083 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:36.226 01:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.226 rmmod nvme_tcp 00:21:36.226 rmmod nvme_fabrics 00:21:36.226 rmmod nvme_keyring 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2294568 ']' 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2294568 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2294568 ']' 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2294568 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2294568 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2294568' 00:21:36.226 killing process with pid 2294568 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2294568 00:21:36.226 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2294568 00:21:36.484 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:36.484 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:36.484 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:36.484 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.484 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.484 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.484 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.484 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.413 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:38.413 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.8vM /tmp/spdk.key-sha256.tRQ /tmp/spdk.key-sha384.AfB /tmp/spdk.key-sha512.h6K /tmp/spdk.key-sha512.ie0 /tmp/spdk.key-sha384.Y6q /tmp/spdk.key-sha256.G9H '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:38.671 00:21:38.671 real 3m7.992s 00:21:38.671 user 7m17.579s 00:21:38.671 sys 0m24.811s 00:21:38.671 01:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:38.671 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.671 ************************************ 00:21:38.671 END TEST nvmf_auth_target 00:21:38.671 ************************************ 00:21:38.671 01:57:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:38.672 ************************************ 00:21:38.672 START TEST nvmf_bdevio_no_huge 00:21:38.672 ************************************ 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:38.672 * Looking for test storage... 
00:21:38.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:38.672 
01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:38.672 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.576 01:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:40.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:40.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:40.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.576 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:40.577 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.577 01:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:40.577 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:40.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:21:40.835 00:21:40.835 --- 10.0.0.2 ping statistics --- 00:21:40.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.835 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:21:40.835 00:21:40.835 --- 10.0.0.1 ping statistics --- 00:21:40.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.835 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2297210 00:21:40.835 01:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:40.835 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2297210 00:21:40.836 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2297210 ']' 00:21:40.836 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.836 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.836 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.836 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.836 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:40.836 [2024-07-26 01:57:22.712944] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:21:40.836 [2024-07-26 01:57:22.713031] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:40.836 [2024-07-26 01:57:22.792139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.094 [2024-07-26 01:57:22.884449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:41.094 [2024-07-26 01:57:22.884503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.094 [2024-07-26 01:57:22.884520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.094 [2024-07-26 01:57:22.884533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.094 [2024-07-26 01:57:22.884546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.094 [2024-07-26 01:57:22.884628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:41.094 [2024-07-26 01:57:22.884681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:41.094 [2024-07-26 01:57:22.884741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:41.094 [2024-07-26 01:57:22.884744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.094 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.094 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:41.094 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.094 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:41.094 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.094 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.094 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.094 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:21:41.094 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.094 [2024-07-26 01:57:22.998788] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.094 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.094 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:41.094 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.094 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.094 Malloc0 00:21:41.094 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.094 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.095 [2024-07-26 01:57:23.036469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:41.095 { 00:21:41.095 "params": { 00:21:41.095 "name": "Nvme$subsystem", 00:21:41.095 "trtype": "$TEST_TRANSPORT", 00:21:41.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.095 "adrfam": "ipv4", 00:21:41.095 "trsvcid": "$NVMF_PORT", 00:21:41.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.095 "hdgst": ${hdgst:-false}, 00:21:41.095 "ddgst": ${ddgst:-false} 00:21:41.095 }, 00:21:41.095 "method": "bdev_nvme_attach_controller" 00:21:41.095 } 00:21:41.095 EOF 00:21:41.095 )") 00:21:41.095 01:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:41.095 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:41.095 "params": { 00:21:41.095 "name": "Nvme1", 00:21:41.095 "trtype": "tcp", 00:21:41.095 "traddr": "10.0.0.2", 00:21:41.095 "adrfam": "ipv4", 00:21:41.095 "trsvcid": "4420", 00:21:41.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.095 "hdgst": false, 00:21:41.095 "ddgst": false 00:21:41.095 }, 00:21:41.095 "method": "bdev_nvme_attach_controller" 00:21:41.095 }' 00:21:41.095 [2024-07-26 01:57:23.080347] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:21:41.095 [2024-07-26 01:57:23.080503] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2297361 ] 00:21:41.353 [2024-07-26 01:57:23.145680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.353 [2024-07-26 01:57:23.228824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.353 [2024-07-26 01:57:23.228873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.353 [2024-07-26 01:57:23.228876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.611 I/O targets: 00:21:41.611 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:41.611 00:21:41.611 00:21:41.611 CUnit - A unit testing framework for C - Version 2.1-3 00:21:41.611 http://cunit.sourceforge.net/ 00:21:41.611 00:21:41.611 00:21:41.611 Suite: bdevio tests on: Nvme1n1 00:21:41.611 Test: blockdev write read block 
...passed 00:21:41.611 Test: blockdev write zeroes read block ...passed 00:21:41.611 Test: blockdev write zeroes read no split ...passed 00:21:41.611 Test: blockdev write zeroes read split ...passed 00:21:41.611 Test: blockdev write zeroes read split partial ...passed 00:21:41.611 Test: blockdev reset ...[2024-07-26 01:57:23.552170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:41.611 [2024-07-26 01:57:23.552295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244cb00 (9): Bad file descriptor 00:21:41.611 [2024-07-26 01:57:23.570883] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:41.611 passed 00:21:41.611 Test: blockdev write read 8 blocks ...passed 00:21:41.611 Test: blockdev write read size > 128k ...passed 00:21:41.611 Test: blockdev write read invalid size ...passed 00:21:41.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:41.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:41.611 Test: blockdev write read max offset ...passed 00:21:41.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:41.870 Test: blockdev writev readv 8 blocks ...passed 00:21:41.870 Test: blockdev writev readv 30 x 1block ...passed 00:21:41.870 Test: blockdev writev readv block ...passed 00:21:41.870 Test: blockdev writev readv size > 128k ...passed 00:21:41.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:41.870 Test: blockdev comparev and writev ...[2024-07-26 01:57:23.828679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.870 [2024-07-26 01:57:23.828715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.870 [2024-07-26 01:57:23.828739] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.870 [2024-07-26 01:57:23.828756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.870 [2024-07-26 01:57:23.829084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.870 [2024-07-26 01:57:23.829109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:41.870 [2024-07-26 01:57:23.829137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.870 [2024-07-26 01:57:23.829153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:41.870 [2024-07-26 01:57:23.829473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.870 [2024-07-26 01:57:23.829497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:41.870 [2024-07-26 01:57:23.829518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.870 [2024-07-26 01:57:23.829533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:41.870 [2024-07-26 01:57:23.829860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.870 [2024-07-26 01:57:23.829883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:21:41.870 [2024-07-26 01:57:23.829904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:41.870 [2024-07-26 01:57:23.829919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:41.870 passed 00:21:42.129 Test: blockdev nvme passthru rw ...passed 00:21:42.129 Test: blockdev nvme passthru vendor specific ...[2024-07-26 01:57:23.913392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.129 [2024-07-26 01:57:23.913419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:42.129 [2024-07-26 01:57:23.913597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.129 [2024-07-26 01:57:23.913620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:42.129 [2024-07-26 01:57:23.913783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.129 [2024-07-26 01:57:23.913805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:42.129 [2024-07-26 01:57:23.913968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.129 [2024-07-26 01:57:23.913991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:42.129 passed 00:21:42.129 Test: blockdev nvme admin passthru ...passed 00:21:42.129 Test: blockdev copy ...passed 00:21:42.129 00:21:42.129 Run Summary: Type Total Ran Passed Failed Inactive 
00:21:42.129 suites 1 1 n/a 0 0 00:21:42.129 tests 23 23 23 0 0 00:21:42.129 asserts 152 152 152 0 n/a 00:21:42.129 00:21:42.129 Elapsed time = 1.182 seconds 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.388 rmmod nvme_tcp 00:21:42.388 rmmod nvme_fabrics 00:21:42.388 rmmod nvme_keyring 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:42.388 
01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2297210 ']' 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2297210 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2297210 ']' 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2297210 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2297210 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2297210' 00:21:42.388 killing process with pid 2297210 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2297210 00:21:42.388 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2297210 00:21:42.954 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:42.954 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:42.954 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:42.954 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.954 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:42.954 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.954 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.954 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.855 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:44.855 00:21:44.855 real 0m6.313s 00:21:44.855 user 0m9.806s 00:21:44.855 sys 0m2.469s 00:21:44.855 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:44.855 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:44.855 ************************************ 00:21:44.855 END TEST nvmf_bdevio_no_huge 00:21:44.855 ************************************ 00:21:44.855 01:57:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:44.855 01:57:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:44.855 01:57:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:44.855 01:57:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:44.855 ************************************ 00:21:44.855 START TEST nvmf_tls 00:21:44.855 ************************************ 00:21:44.855 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:45.114 * Looking for test storage... 
00:21:45.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.114 
01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.114 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.115 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.014 01:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:47.014 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.015 01:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:47.015 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:47.015 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.015 01:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:47.015 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:47.015 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:47.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:21:47.015 00:21:47.015 --- 10.0.0.2 ping statistics --- 00:21:47.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.015 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:21:47.015 00:21:47.015 --- 10.0.0.1 ping statistics --- 00:21:47.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.015 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2299439 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2299439 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2299439 ']' 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.015 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.016 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.016 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.016 [2024-07-26 01:57:28.975410] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:21:47.016 [2024-07-26 01:57:28.975501] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.016 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.273 [2024-07-26 01:57:29.046756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.273 [2024-07-26 01:57:29.135850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.273 [2024-07-26 01:57:29.135911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.273 [2024-07-26 01:57:29.135928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.273 [2024-07-26 01:57:29.135942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.273 [2024-07-26 01:57:29.135953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:47.273 [2024-07-26 01:57:29.135991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.273 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.273 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:47.273 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.273 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:47.273 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.273 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.273 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:47.273 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:47.530 true 00:21:47.530 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:47.530 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:47.787 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:47.787 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:47.787 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:48.045 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:48.045 01:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:48.302 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:48.302 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:48.302 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:48.560 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:48.560 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:48.818 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:48.818 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:48.818 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:48.818 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:49.076 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:49.076 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:49.076 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:49.336 01:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:49.336 01:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:49.596 01:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:49.596 
01:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:49.596 01:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:49.855 01:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:49.855 01:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:50.114 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:50.373 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:50.373 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:50.373 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.QmLtVZwaRO 00:21:50.373 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:50.373 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.KZAnu2YuA8 00:21:50.373 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:50.373 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:50.373 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.QmLtVZwaRO 00:21:50.373 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.KZAnu2YuA8 00:21:50.373 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:50.631 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
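The `format_interchange_psk` steps above turn a configured PSK into the NVMe TLS PSK interchange format, `NVMeTLSkey-1:<hash>:<base64 payload>:`, where the payload is the key bytes followed by their CRC32. A minimal sketch of that encoding, assuming (as the 48-character base64 payloads in this log suggest) that the 32-character ASCII key strings are taken verbatim and the CRC32 is appended little-endian; this is an illustration, not SPDK's actual helper:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, hash_id: int = 1) -> str:
    """Sketch of the NVMe TLS PSK interchange encoding.

    Payload = raw key bytes + little-endian CRC32 of those bytes,
    base64-encoded and wrapped as NVMeTLSkey-1:<hash>:<payload>:
    """
    data = key.encode("ascii")
    payload = data + struct.pack("<I", zlib.crc32(data))
    return f"NVMeTLSkey-1:{hash_id:02}:{base64.b64encode(payload).decode()}:"

# Same input as the first format_interchange_psk call in the log
psk = format_interchange_psk("00112233445566778899aabbccddeeff")
```

The trailing CRC lets a consumer detect a corrupted or truncated key before attempting a TLS handshake with it.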
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:50.889 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.QmLtVZwaRO 00:21:50.889 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QmLtVZwaRO 00:21:50.889 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:51.147 [2024-07-26 01:57:33.003694] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.147 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:51.404 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:51.660 [2024-07-26 01:57:33.501021] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:51.660 [2024-07-26 01:57:33.501270] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.660 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:51.917 malloc0 00:21:51.917 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:52.174 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QmLtVZwaRO 00:21:52.432 
[2024-07-26 01:57:34.230178] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:52.432 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.QmLtVZwaRO 00:21:52.432 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.430 Initializing NVMe Controllers 00:22:02.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:02.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:02.430 Initialization complete. Launching workers. 00:22:02.430 ======================================================== 00:22:02.430 Latency(us) 00:22:02.430 Device Information : IOPS MiB/s Average min max 00:22:02.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7680.33 30.00 8335.70 1239.97 9872.36 00:22:02.430 ======================================================== 00:22:02.430 Total : 7680.33 30.00 8335.70 1239.97 9872.36 00:22:02.430 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QmLtVZwaRO 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QmLtVZwaRO' 00:22:02.430 01:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2301217 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2301217 /var/tmp/bdevperf.sock 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2301217 ']' 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.430 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.430 [2024-07-26 01:57:44.406772] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:22:02.430 [2024-07-26 01:57:44.406858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301217 ] 00:22:02.430 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.688 [2024-07-26 01:57:44.469300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.688 [2024-07-26 01:57:44.555992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.688 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.688 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:02.688 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QmLtVZwaRO 00:22:02.944 [2024-07-26 01:57:44.886668] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.944 [2024-07-26 01:57:44.886792] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:03.201 TLSTESTn1 00:22:03.201 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:03.201 Running I/O for 10 seconds... 
00:22:13.161 00:22:13.161 Latency(us) 00:22:13.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.161 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:13.161 Verification LBA range: start 0x0 length 0x2000 00:22:13.161 TLSTESTn1 : 10.03 3494.90 13.65 0.00 0.00 36547.11 6165.24 43108.12 00:22:13.161 =================================================================================================================== 00:22:13.161 Total : 3494.90 13.65 0.00 0.00 36547.11 6165.24 43108.12 00:22:13.161 0 00:22:13.161 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:13.161 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2301217 00:22:13.161 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2301217 ']' 00:22:13.161 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2301217 00:22:13.161 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:13.161 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:13.161 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2301217 00:22:13.161 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:13.161 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2301217' 00:22:13.419 killing process with pid 2301217 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2301217 00:22:13.419 Received shutdown signal, test time was about 10.000000 seconds 00:22:13.419 
00:22:13.419 Latency(us) 00:22:13.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.419 =================================================================================================================== 00:22:13.419 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.419 [2024-07-26 01:57:55.173178] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2301217 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KZAnu2YuA8 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KZAnu2YuA8 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KZAnu2YuA8 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:13.419 01:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.KZAnu2YuA8' 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2302531 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2302531 /var/tmp/bdevperf.sock 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2302531 ']' 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:13.419 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.677 [2024-07-26 01:57:55.438527] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:22:13.677 [2024-07-26 01:57:55.438606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302531 ] 00:22:13.677 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.677 [2024-07-26 01:57:55.498189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.677 [2024-07-26 01:57:55.583321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.934 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:13.934 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:13.934 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KZAnu2YuA8 00:22:14.192 [2024-07-26 01:57:55.964987] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.192 [2024-07-26 01:57:55.965141] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:14.192 [2024-07-26 01:57:55.975637] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:14.192 [2024-07-26 01:57:55.976199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1303bb0 (107): Transport endpoint is not connected 00:22:14.192 [2024-07-26 01:57:55.977186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1303bb0 
(9): Bad file descriptor 00:22:14.192 [2024-07-26 01:57:55.978187] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:14.192 [2024-07-26 01:57:55.978207] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:14.192 [2024-07-26 01:57:55.978225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:14.192 request: 00:22:14.192 { 00:22:14.192 "name": "TLSTEST", 00:22:14.192 "trtype": "tcp", 00:22:14.192 "traddr": "10.0.0.2", 00:22:14.192 "adrfam": "ipv4", 00:22:14.192 "trsvcid": "4420", 00:22:14.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:14.192 "prchk_reftag": false, 00:22:14.192 "prchk_guard": false, 00:22:14.192 "hdgst": false, 00:22:14.192 "ddgst": false, 00:22:14.192 "psk": "/tmp/tmp.KZAnu2YuA8", 00:22:14.192 "method": "bdev_nvme_attach_controller", 00:22:14.192 "req_id": 1 00:22:14.192 } 00:22:14.192 Got JSON-RPC error response 00:22:14.192 response: 00:22:14.192 { 00:22:14.192 "code": -5, 00:22:14.192 "message": "Input/output error" 00:22:14.192 } 00:22:14.192 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2302531 00:22:14.192 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2302531 ']' 00:22:14.192 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2302531 00:22:14.192 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:14.192 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.192 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2302531 00:22:14.192 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:14.192 01:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:14.192 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2302531' 00:22:14.192 killing process with pid 2302531 00:22:14.192 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2302531 00:22:14.192 Received shutdown signal, test time was about 10.000000 seconds 00:22:14.192 00:22:14.192 Latency(us) 00:22:14.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.192 =================================================================================================================== 00:22:14.192 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:14.192 [2024-07-26 01:57:56.029459] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:14.192 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2302531 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QmLtVZwaRO 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # 
valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QmLtVZwaRO 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QmLtVZwaRO 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QmLtVZwaRO' 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2302657 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2302657 /var/tmp/bdevperf.sock 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 2302657 ']' 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:14.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:14.451 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.451 [2024-07-26 01:57:56.295709] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:14.451 [2024-07-26 01:57:56.295780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302657 ] 00:22:14.451 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.451 [2024-07-26 01:57:56.355530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.451 [2024-07-26 01:57:56.437440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.710 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:14.710 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:14.710 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.QmLtVZwaRO 00:22:14.969 [2024-07-26 01:57:56.759598] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.969 [2024-07-26 01:57:56.759731] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:14.969 [2024-07-26 01:57:56.767520] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:14.969 [2024-07-26 01:57:56.767553] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:14.969 [2024-07-26 01:57:56.767607] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:14.969 [2024-07-26 01:57:56.768596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4bb0 (107): Transport endpoint is not connected 00:22:14.969 [2024-07-26 01:57:56.769587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4bb0 (9): Bad file descriptor 00:22:14.969 [2024-07-26 01:57:56.770586] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:14.969 [2024-07-26 01:57:56.770606] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:14.969 [2024-07-26 01:57:56.770638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:14.969 request: 00:22:14.969 { 00:22:14.969 "name": "TLSTEST", 00:22:14.969 "trtype": "tcp", 00:22:14.969 "traddr": "10.0.0.2", 00:22:14.969 "adrfam": "ipv4", 00:22:14.969 "trsvcid": "4420", 00:22:14.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.969 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:14.969 "prchk_reftag": false, 00:22:14.969 "prchk_guard": false, 00:22:14.969 "hdgst": false, 00:22:14.969 "ddgst": false, 00:22:14.969 "psk": "/tmp/tmp.QmLtVZwaRO", 00:22:14.969 "method": "bdev_nvme_attach_controller", 00:22:14.969 "req_id": 1 00:22:14.969 } 00:22:14.969 Got JSON-RPC error response 00:22:14.969 response: 00:22:14.969 { 00:22:14.969 "code": -5, 00:22:14.969 "message": "Input/output error" 00:22:14.969 } 00:22:14.969 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2302657 00:22:14.969 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2302657 ']' 00:22:14.969 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2302657 00:22:14.969 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:14.969 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.969 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2302657 00:22:14.969 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:14.969 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:14.969 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2302657' 00:22:14.969 killing process with pid 2302657 00:22:14.969 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2302657 00:22:14.969 Received shutdown signal, test time was 
about 10.000000 seconds 00:22:14.969 00:22:14.969 Latency(us) 00:22:14.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.969 =================================================================================================================== 00:22:14.969 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:14.969 [2024-07-26 01:57:56.818688] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:14.969 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2302657 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QmLtVZwaRO 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QmLtVZwaRO 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t 
run_bdevperf 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QmLtVZwaRO 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QmLtVZwaRO' 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2302775 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2302775 /var/tmp/bdevperf.sock 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2302775 ']' 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:15.228 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.228 [2024-07-26 01:57:57.076645] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:15.228 [2024-07-26 01:57:57.076734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302775 ] 00:22:15.228 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.228 [2024-07-26 01:57:57.143904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.228 [2024-07-26 01:57:57.236712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.487 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.487 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:15.487 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QmLtVZwaRO 00:22:15.745 [2024-07-26 01:57:57.618306] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:15.745 [2024-07-26 01:57:57.618447] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:15.745 [2024-07-26 01:57:57.629789] tcp.c: 
894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:15.745 [2024-07-26 01:57:57.629820] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:15.745 [2024-07-26 01:57:57.629873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:15.745 [2024-07-26 01:57:57.630358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301bb0 (107): Transport endpoint is not connected 00:22:15.745 [2024-07-26 01:57:57.631360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301bb0 (9): Bad file descriptor 00:22:15.745 [2024-07-26 01:57:57.632361] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:15.745 [2024-07-26 01:57:57.632380] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:15.745 [2024-07-26 01:57:57.632397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:15.745 request: 00:22:15.745 { 00:22:15.745 "name": "TLSTEST", 00:22:15.745 "trtype": "tcp", 00:22:15.745 "traddr": "10.0.0.2", 00:22:15.745 "adrfam": "ipv4", 00:22:15.745 "trsvcid": "4420", 00:22:15.745 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:15.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.745 "prchk_reftag": false, 00:22:15.745 "prchk_guard": false, 00:22:15.745 "hdgst": false, 00:22:15.745 "ddgst": false, 00:22:15.745 "psk": "/tmp/tmp.QmLtVZwaRO", 00:22:15.745 "method": "bdev_nvme_attach_controller", 00:22:15.745 "req_id": 1 00:22:15.745 } 00:22:15.745 Got JSON-RPC error response 00:22:15.745 response: 00:22:15.745 { 00:22:15.745 "code": -5, 00:22:15.745 "message": "Input/output error" 00:22:15.745 } 00:22:15.745 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2302775 00:22:15.745 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2302775 ']' 00:22:15.745 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2302775 00:22:15.745 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:15.745 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:15.745 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2302775 00:22:15.745 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:15.745 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:15.745 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2302775' 00:22:15.745 killing process with pid 2302775 00:22:15.745 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2302775 00:22:15.745 Received shutdown signal, test time was 
about 10.000000 seconds 00:22:15.745 00:22:15.745 Latency(us) 00:22:15.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.746 =================================================================================================================== 00:22:15.746 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:15.746 [2024-07-26 01:57:57.676386] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:15.746 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2302775 00:22:16.003 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:16.003 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:16.003 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:16.003 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:16.003 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:16.003 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:16.003 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:16.004 01:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2302820 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2302820 /var/tmp/bdevperf.sock 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2302820 ']' 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:16.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.004 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.004 [2024-07-26 01:57:57.942605] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:16.004 [2024-07-26 01:57:57.942680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302820 ] 00:22:16.004 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.004 [2024-07-26 01:57:57.999763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.263 [2024-07-26 01:57:58.083237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.263 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:16.263 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:16.263 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:16.521 [2024-07-26 01:57:58.472027] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:16.521 [2024-07-26 01:57:58.473848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107a160 (9): Bad file descriptor 00:22:16.521 [2024-07-26 01:57:58.474844] 
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.521 [2024-07-26 01:57:58.474866] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:16.521 [2024-07-26 01:57:58.474901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:16.521 request: 00:22:16.521 { 00:22:16.521 "name": "TLSTEST", 00:22:16.521 "trtype": "tcp", 00:22:16.521 "traddr": "10.0.0.2", 00:22:16.521 "adrfam": "ipv4", 00:22:16.521 "trsvcid": "4420", 00:22:16.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.521 "prchk_reftag": false, 00:22:16.521 "prchk_guard": false, 00:22:16.521 "hdgst": false, 00:22:16.521 "ddgst": false, 00:22:16.521 "method": "bdev_nvme_attach_controller", 00:22:16.521 "req_id": 1 00:22:16.521 } 00:22:16.521 Got JSON-RPC error response 00:22:16.521 response: 00:22:16.521 { 00:22:16.521 "code": -5, 00:22:16.521 "message": "Input/output error" 00:22:16.521 } 00:22:16.521 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2302820 00:22:16.521 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2302820 ']' 00:22:16.521 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2302820 00:22:16.521 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:16.521 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.521 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2302820 00:22:16.521 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:16.521 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:16.521 01:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2302820' 00:22:16.521 killing process with pid 2302820 00:22:16.521 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2302820 00:22:16.521 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.521 00:22:16.521 Latency(us) 00:22:16.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.521 =================================================================================================================== 00:22:16.521 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:16.521 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2302820 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2299439 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2299439 ']' 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2299439 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2299439 00:22:16.780 
01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2299439' 00:22:16.780 killing process with pid 2299439 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2299439 00:22:16.780 [2024-07-26 01:57:58.771220] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:16.780 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2299439 00:22:17.039 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:17.039 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:17.039 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:17.039 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:17.039 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:17.039 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:17.039 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.sj8rja32Au 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.sj8rja32Au 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2302969 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2302969 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2302969 ']' 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:17.297 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.297 [2024-07-26 01:57:59.116475] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:17.297 [2024-07-26 01:57:59.116551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.297 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.297 [2024-07-26 01:57:59.179117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.297 [2024-07-26 01:57:59.266484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.297 [2024-07-26 01:57:59.266550] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.297 [2024-07-26 01:57:59.266578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.297 [2024-07-26 01:57:59.266589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.297 [2024-07-26 01:57:59.266599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:17.297 [2024-07-26 01:57:59.266626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.556 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.556 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:17.556 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.556 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.556 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.556 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.556 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.sj8rja32Au 00:22:17.556 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sj8rja32Au 00:22:17.556 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:17.814 [2024-07-26 01:57:59.676840] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.814 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:18.072 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:18.330 [2024-07-26 01:58:00.246360] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:18.330 [2024-07-26 01:58:00.246619] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:18.330 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:18.588 malloc0 00:22:18.588 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:18.846 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sj8rja32Au 00:22:19.105 [2024-07-26 01:58:01.044424] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sj8rja32Au 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sj8rja32Au' 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2303251 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2303251 /var/tmp/bdevperf.sock 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2303251 ']' 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.105 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.105 [2024-07-26 01:58:01.108336] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:22:19.105 [2024-07-26 01:58:01.108464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2303251 ] 00:22:19.364 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.364 [2024-07-26 01:58:01.169913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.364 [2024-07-26 01:58:01.254930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.364 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.364 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:19.364 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sj8rja32Au 00:22:19.622 [2024-07-26 01:58:01.602454] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:19.622 [2024-07-26 01:58:01.602570] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:19.879 TLSTESTn1 00:22:19.879 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:19.879 Running I/O for 10 seconds... 
00:22:32.074 00:22:32.074 Latency(us) 00:22:32.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.074 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:32.074 Verification LBA range: start 0x0 length 0x2000 00:22:32.074 TLSTESTn1 : 10.04 3432.04 13.41 0.00 0.00 37202.81 9806.13 37476.88 00:22:32.074 =================================================================================================================== 00:22:32.074 Total : 3432.04 13.41 0.00 0.00 37202.81 9806.13 37476.88 00:22:32.074 0 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2303251 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2303251 ']' 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2303251 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2303251 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2303251' 00:22:32.074 killing process with pid 2303251 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2303251 00:22:32.074 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.074 
00:22:32.074 Latency(us) 00:22:32.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.074 =================================================================================================================== 00:22:32.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.074 [2024-07-26 01:58:11.923294] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:32.074 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2303251 00:22:32.074 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.sj8rja32Au 00:22:32.074 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sj8rja32Au 00:22:32.074 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sj8rja32Au 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sj8rja32Au 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:32.075 01:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sj8rja32Au' 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2304558 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2304558 /var/tmp/bdevperf.sock 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2304558 ']' 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.075 [2024-07-26 01:58:12.197474] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:32.075 [2024-07-26 01:58:12.197564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304558 ] 00:22:32.075 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.075 [2024-07-26 01:58:12.255812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.075 [2024-07-26 01:58:12.335903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sj8rja32Au 00:22:32.075 [2024-07-26 01:58:12.702387] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.075 [2024-07-26 01:58:12.702463] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:32.075 [2024-07-26 01:58:12.702477] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.sj8rja32Au 00:22:32.075 request: 00:22:32.075 { 00:22:32.075 "name": "TLSTEST", 00:22:32.075 "trtype": "tcp", 00:22:32.075 "traddr": "10.0.0.2", 00:22:32.075 
"adrfam": "ipv4", 00:22:32.075 "trsvcid": "4420", 00:22:32.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.075 "prchk_reftag": false, 00:22:32.075 "prchk_guard": false, 00:22:32.075 "hdgst": false, 00:22:32.075 "ddgst": false, 00:22:32.075 "psk": "/tmp/tmp.sj8rja32Au", 00:22:32.075 "method": "bdev_nvme_attach_controller", 00:22:32.075 "req_id": 1 00:22:32.075 } 00:22:32.075 Got JSON-RPC error response 00:22:32.075 response: 00:22:32.075 { 00:22:32.075 "code": -1, 00:22:32.075 "message": "Operation not permitted" 00:22:32.075 } 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2304558 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2304558 ']' 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2304558 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2304558 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2304558' 00:22:32.075 killing process with pid 2304558 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2304558 00:22:32.075 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.075 00:22:32.075 Latency(us) 00:22:32.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:32.075 =================================================================================================================== 00:22:32.075 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2304558 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2302969 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2302969 ']' 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2302969 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2302969 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2302969' 00:22:32.075 killing process with pid 2302969 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 2302969 00:22:32.075 [2024-07-26 01:58:12.980264] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:32.075 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2302969 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2304704 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2304704 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2304704 ']' 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.075 [2024-07-26 01:58:13.268897] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:32.075 [2024-07-26 01:58:13.268974] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.075 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.075 [2024-07-26 01:58:13.332760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.075 [2024-07-26 01:58:13.416649] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.075 [2024-07-26 01:58:13.416717] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.075 [2024-07-26 01:58:13.416730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.075 [2024-07-26 01:58:13.416740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.075 [2024-07-26 01:58:13.416749] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:32.075 [2024-07-26 01:58:13.416781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.075 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.sj8rja32Au 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.sj8rja32Au 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.sj8rja32Au 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sj8rja32Au 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:32.076 [2024-07-26 01:58:13.819737] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.076 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:32.363 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:32.633 [2024-07-26 01:58:14.373192] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.633 [2024-07-26 01:58:14.373440] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.633 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:32.892 malloc0 00:22:32.892 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:33.150 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sj8rja32Au 00:22:33.409 [2024-07-26 01:58:15.183870] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:33.409 [2024-07-26 01:58:15.183914] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:33.409 [2024-07-26 01:58:15.183958] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:33.409 request: 00:22:33.409 { 
00:22:33.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.409 "host": "nqn.2016-06.io.spdk:host1", 00:22:33.409 "psk": "/tmp/tmp.sj8rja32Au", 00:22:33.409 "method": "nvmf_subsystem_add_host", 00:22:33.409 "req_id": 1 00:22:33.409 } 00:22:33.409 Got JSON-RPC error response 00:22:33.409 response: 00:22:33.409 { 00:22:33.409 "code": -32603, 00:22:33.409 "message": "Internal error" 00:22:33.409 } 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2304704 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2304704 ']' 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2304704 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2304704 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2304704' 00:22:33.409 killing process with pid 2304704 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 2304704 00:22:33.409 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2304704 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.sj8rja32Au 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2304997 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2304997 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2304997 ']' 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.668 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.668 [2024-07-26 01:58:15.500279] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:33.668 [2024-07-26 01:58:15.500350] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.668 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.668 [2024-07-26 01:58:15.568626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.668 [2024-07-26 01:58:15.658367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.668 [2024-07-26 01:58:15.658432] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.668 [2024-07-26 01:58:15.658458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.668 [2024-07-26 01:58:15.658473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.668 [2024-07-26 01:58:15.658485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:33.668 [2024-07-26 01:58:15.658514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.926 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.926 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:33.927 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.927 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.927 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.927 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.927 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.sj8rja32Au 00:22:33.927 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sj8rja32Au 00:22:33.927 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:34.185 [2024-07-26 01:58:16.022630] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.185 01:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:34.443 01:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:34.701 [2024-07-26 01:58:16.523986] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:34.701 [2024-07-26 01:58:16.524269] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:34.701 01:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:34.959 malloc0 00:22:34.959 01:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:35.217 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sj8rja32Au 00:22:35.476 [2024-07-26 01:58:17.248615] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:35.476 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2305164 00:22:35.476 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:35.476 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:35.476 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2305164 /var/tmp/bdevperf.sock 00:22:35.476 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2305164 ']' 00:22:35.476 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.476 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:35.476 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:35.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:35.476 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:35.476 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.476 [2024-07-26 01:58:17.309644] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:35.476 [2024-07-26 01:58:17.309728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2305164 ] 00:22:35.476 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.476 [2024-07-26 01:58:17.390775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.734 [2024-07-26 01:58:17.495684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.734 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.734 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:35.734 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sj8rja32Au 00:22:35.992 [2024-07-26 01:58:17.929822] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:35.992 [2024-07-26 01:58:17.929937] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:36.250 TLSTESTn1 00:22:36.250 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:36.509 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:36.509 "subsystems": [ 00:22:36.509 { 00:22:36.509 "subsystem": "keyring", 00:22:36.509 "config": [] 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "subsystem": "iobuf", 00:22:36.509 "config": [ 00:22:36.509 { 00:22:36.509 "method": "iobuf_set_options", 00:22:36.509 "params": { 00:22:36.509 "small_pool_count": 8192, 00:22:36.509 "large_pool_count": 1024, 00:22:36.509 "small_bufsize": 8192, 00:22:36.509 "large_bufsize": 135168 00:22:36.509 } 00:22:36.509 } 00:22:36.509 ] 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "subsystem": "sock", 00:22:36.509 "config": [ 00:22:36.509 { 00:22:36.509 "method": "sock_set_default_impl", 00:22:36.509 "params": { 00:22:36.509 "impl_name": "posix" 00:22:36.509 } 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "method": "sock_impl_set_options", 00:22:36.509 "params": { 00:22:36.509 "impl_name": "ssl", 00:22:36.509 "recv_buf_size": 4096, 00:22:36.509 "send_buf_size": 4096, 00:22:36.509 "enable_recv_pipe": true, 00:22:36.509 "enable_quickack": false, 00:22:36.509 "enable_placement_id": 0, 00:22:36.509 "enable_zerocopy_send_server": true, 00:22:36.509 "enable_zerocopy_send_client": false, 00:22:36.509 "zerocopy_threshold": 0, 00:22:36.509 "tls_version": 0, 00:22:36.509 "enable_ktls": false 00:22:36.509 } 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "method": "sock_impl_set_options", 00:22:36.509 "params": { 00:22:36.509 "impl_name": "posix", 00:22:36.509 "recv_buf_size": 2097152, 00:22:36.509 "send_buf_size": 2097152, 00:22:36.509 "enable_recv_pipe": true, 00:22:36.509 "enable_quickack": false, 00:22:36.509 "enable_placement_id": 0, 00:22:36.509 "enable_zerocopy_send_server": true, 00:22:36.509 "enable_zerocopy_send_client": false, 00:22:36.509 "zerocopy_threshold": 0, 00:22:36.509 "tls_version": 0, 00:22:36.509 "enable_ktls": false 00:22:36.509 } 
00:22:36.509 } 00:22:36.509 ] 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "subsystem": "vmd", 00:22:36.509 "config": [] 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "subsystem": "accel", 00:22:36.509 "config": [ 00:22:36.509 { 00:22:36.509 "method": "accel_set_options", 00:22:36.509 "params": { 00:22:36.509 "small_cache_size": 128, 00:22:36.509 "large_cache_size": 16, 00:22:36.509 "task_count": 2048, 00:22:36.509 "sequence_count": 2048, 00:22:36.509 "buf_count": 2048 00:22:36.509 } 00:22:36.509 } 00:22:36.509 ] 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "subsystem": "bdev", 00:22:36.509 "config": [ 00:22:36.509 { 00:22:36.509 "method": "bdev_set_options", 00:22:36.509 "params": { 00:22:36.509 "bdev_io_pool_size": 65535, 00:22:36.509 "bdev_io_cache_size": 256, 00:22:36.509 "bdev_auto_examine": true, 00:22:36.509 "iobuf_small_cache_size": 128, 00:22:36.509 "iobuf_large_cache_size": 16 00:22:36.509 } 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "method": "bdev_raid_set_options", 00:22:36.509 "params": { 00:22:36.509 "process_window_size_kb": 1024, 00:22:36.509 "process_max_bandwidth_mb_sec": 0 00:22:36.509 } 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "method": "bdev_iscsi_set_options", 00:22:36.509 "params": { 00:22:36.509 "timeout_sec": 30 00:22:36.509 } 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "method": "bdev_nvme_set_options", 00:22:36.509 "params": { 00:22:36.509 "action_on_timeout": "none", 00:22:36.509 "timeout_us": 0, 00:22:36.509 "timeout_admin_us": 0, 00:22:36.509 "keep_alive_timeout_ms": 10000, 00:22:36.509 "arbitration_burst": 0, 00:22:36.509 "low_priority_weight": 0, 00:22:36.509 "medium_priority_weight": 0, 00:22:36.509 "high_priority_weight": 0, 00:22:36.509 "nvme_adminq_poll_period_us": 10000, 00:22:36.509 "nvme_ioq_poll_period_us": 0, 00:22:36.509 "io_queue_requests": 0, 00:22:36.509 "delay_cmd_submit": true, 00:22:36.509 "transport_retry_count": 4, 00:22:36.509 "bdev_retry_count": 3, 00:22:36.509 "transport_ack_timeout": 0, 00:22:36.509 
"ctrlr_loss_timeout_sec": 0, 00:22:36.509 "reconnect_delay_sec": 0, 00:22:36.509 "fast_io_fail_timeout_sec": 0, 00:22:36.509 "disable_auto_failback": false, 00:22:36.509 "generate_uuids": false, 00:22:36.509 "transport_tos": 0, 00:22:36.509 "nvme_error_stat": false, 00:22:36.509 "rdma_srq_size": 0, 00:22:36.509 "io_path_stat": false, 00:22:36.509 "allow_accel_sequence": false, 00:22:36.509 "rdma_max_cq_size": 0, 00:22:36.509 "rdma_cm_event_timeout_ms": 0, 00:22:36.509 "dhchap_digests": [ 00:22:36.509 "sha256", 00:22:36.509 "sha384", 00:22:36.509 "sha512" 00:22:36.509 ], 00:22:36.509 "dhchap_dhgroups": [ 00:22:36.509 "null", 00:22:36.509 "ffdhe2048", 00:22:36.509 "ffdhe3072", 00:22:36.509 "ffdhe4096", 00:22:36.509 "ffdhe6144", 00:22:36.509 "ffdhe8192" 00:22:36.509 ] 00:22:36.509 } 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "method": "bdev_nvme_set_hotplug", 00:22:36.509 "params": { 00:22:36.509 "period_us": 100000, 00:22:36.509 "enable": false 00:22:36.509 } 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "method": "bdev_malloc_create", 00:22:36.509 "params": { 00:22:36.509 "name": "malloc0", 00:22:36.509 "num_blocks": 8192, 00:22:36.509 "block_size": 4096, 00:22:36.509 "physical_block_size": 4096, 00:22:36.509 "uuid": "d5695daa-c484-4093-9e1a-b56921af573b", 00:22:36.509 "optimal_io_boundary": 0, 00:22:36.509 "md_size": 0, 00:22:36.509 "dif_type": 0, 00:22:36.509 "dif_is_head_of_md": false, 00:22:36.509 "dif_pi_format": 0 00:22:36.509 } 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "method": "bdev_wait_for_examine" 00:22:36.509 } 00:22:36.509 ] 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "subsystem": "nbd", 00:22:36.509 "config": [] 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "subsystem": "scheduler", 00:22:36.509 "config": [ 00:22:36.509 { 00:22:36.509 "method": "framework_set_scheduler", 00:22:36.509 "params": { 00:22:36.509 "name": "static" 00:22:36.509 } 00:22:36.509 } 00:22:36.509 ] 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "subsystem": "nvmf", 00:22:36.509 
"config": [ 00:22:36.509 { 00:22:36.509 "method": "nvmf_set_config", 00:22:36.509 "params": { 00:22:36.509 "discovery_filter": "match_any", 00:22:36.509 "admin_cmd_passthru": { 00:22:36.509 "identify_ctrlr": false 00:22:36.509 } 00:22:36.509 } 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "method": "nvmf_set_max_subsystems", 00:22:36.509 "params": { 00:22:36.509 "max_subsystems": 1024 00:22:36.509 } 00:22:36.509 }, 00:22:36.509 { 00:22:36.509 "method": "nvmf_set_crdt", 00:22:36.509 "params": { 00:22:36.509 "crdt1": 0, 00:22:36.510 "crdt2": 0, 00:22:36.510 "crdt3": 0 00:22:36.510 } 00:22:36.510 }, 00:22:36.510 { 00:22:36.510 "method": "nvmf_create_transport", 00:22:36.510 "params": { 00:22:36.510 "trtype": "TCP", 00:22:36.510 "max_queue_depth": 128, 00:22:36.510 "max_io_qpairs_per_ctrlr": 127, 00:22:36.510 "in_capsule_data_size": 4096, 00:22:36.510 "max_io_size": 131072, 00:22:36.510 "io_unit_size": 131072, 00:22:36.510 "max_aq_depth": 128, 00:22:36.510 "num_shared_buffers": 511, 00:22:36.510 "buf_cache_size": 4294967295, 00:22:36.510 "dif_insert_or_strip": false, 00:22:36.510 "zcopy": false, 00:22:36.510 "c2h_success": false, 00:22:36.510 "sock_priority": 0, 00:22:36.510 "abort_timeout_sec": 1, 00:22:36.510 "ack_timeout": 0, 00:22:36.510 "data_wr_pool_size": 0 00:22:36.510 } 00:22:36.510 }, 00:22:36.510 { 00:22:36.510 "method": "nvmf_create_subsystem", 00:22:36.510 "params": { 00:22:36.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.510 "allow_any_host": false, 00:22:36.510 "serial_number": "SPDK00000000000001", 00:22:36.510 "model_number": "SPDK bdev Controller", 00:22:36.510 "max_namespaces": 10, 00:22:36.510 "min_cntlid": 1, 00:22:36.510 "max_cntlid": 65519, 00:22:36.510 "ana_reporting": false 00:22:36.510 } 00:22:36.510 }, 00:22:36.510 { 00:22:36.510 "method": "nvmf_subsystem_add_host", 00:22:36.510 "params": { 00:22:36.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.510 "host": "nqn.2016-06.io.spdk:host1", 00:22:36.510 "psk": "/tmp/tmp.sj8rja32Au" 
00:22:36.510 } 00:22:36.510 }, 00:22:36.510 { 00:22:36.510 "method": "nvmf_subsystem_add_ns", 00:22:36.510 "params": { 00:22:36.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.510 "namespace": { 00:22:36.510 "nsid": 1, 00:22:36.510 "bdev_name": "malloc0", 00:22:36.510 "nguid": "D5695DAAC48440939E1AB56921AF573B", 00:22:36.510 "uuid": "d5695daa-c484-4093-9e1a-b56921af573b", 00:22:36.510 "no_auto_visible": false 00:22:36.510 } 00:22:36.510 } 00:22:36.510 }, 00:22:36.510 { 00:22:36.510 "method": "nvmf_subsystem_add_listener", 00:22:36.510 "params": { 00:22:36.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.510 "listen_address": { 00:22:36.510 "trtype": "TCP", 00:22:36.510 "adrfam": "IPv4", 00:22:36.510 "traddr": "10.0.0.2", 00:22:36.510 "trsvcid": "4420" 00:22:36.510 }, 00:22:36.510 "secure_channel": true 00:22:36.510 } 00:22:36.510 } 00:22:36.510 ] 00:22:36.510 } 00:22:36.510 ] 00:22:36.510 }' 00:22:36.510 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:36.768 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:36.768 "subsystems": [ 00:22:36.768 { 00:22:36.768 "subsystem": "keyring", 00:22:36.768 "config": [] 00:22:36.768 }, 00:22:36.768 { 00:22:36.768 "subsystem": "iobuf", 00:22:36.768 "config": [ 00:22:36.768 { 00:22:36.768 "method": "iobuf_set_options", 00:22:36.768 "params": { 00:22:36.768 "small_pool_count": 8192, 00:22:36.768 "large_pool_count": 1024, 00:22:36.768 "small_bufsize": 8192, 00:22:36.768 "large_bufsize": 135168 00:22:36.768 } 00:22:36.768 } 00:22:36.768 ] 00:22:36.768 }, 00:22:36.768 { 00:22:36.768 "subsystem": "sock", 00:22:36.768 "config": [ 00:22:36.768 { 00:22:36.768 "method": "sock_set_default_impl", 00:22:36.768 "params": { 00:22:36.768 "impl_name": "posix" 00:22:36.768 } 00:22:36.768 }, 00:22:36.768 { 00:22:36.768 "method": "sock_impl_set_options", 00:22:36.768 
"params": { 00:22:36.768 "impl_name": "ssl", 00:22:36.768 "recv_buf_size": 4096, 00:22:36.768 "send_buf_size": 4096, 00:22:36.768 "enable_recv_pipe": true, 00:22:36.768 "enable_quickack": false, 00:22:36.768 "enable_placement_id": 0, 00:22:36.768 "enable_zerocopy_send_server": true, 00:22:36.768 "enable_zerocopy_send_client": false, 00:22:36.768 "zerocopy_threshold": 0, 00:22:36.768 "tls_version": 0, 00:22:36.768 "enable_ktls": false 00:22:36.768 } 00:22:36.768 }, 00:22:36.768 { 00:22:36.768 "method": "sock_impl_set_options", 00:22:36.768 "params": { 00:22:36.768 "impl_name": "posix", 00:22:36.768 "recv_buf_size": 2097152, 00:22:36.768 "send_buf_size": 2097152, 00:22:36.768 "enable_recv_pipe": true, 00:22:36.768 "enable_quickack": false, 00:22:36.768 "enable_placement_id": 0, 00:22:36.768 "enable_zerocopy_send_server": true, 00:22:36.768 "enable_zerocopy_send_client": false, 00:22:36.768 "zerocopy_threshold": 0, 00:22:36.768 "tls_version": 0, 00:22:36.768 "enable_ktls": false 00:22:36.768 } 00:22:36.768 } 00:22:36.768 ] 00:22:36.768 }, 00:22:36.768 { 00:22:36.768 "subsystem": "vmd", 00:22:36.768 "config": [] 00:22:36.768 }, 00:22:36.768 { 00:22:36.768 "subsystem": "accel", 00:22:36.768 "config": [ 00:22:36.768 { 00:22:36.768 "method": "accel_set_options", 00:22:36.768 "params": { 00:22:36.768 "small_cache_size": 128, 00:22:36.768 "large_cache_size": 16, 00:22:36.768 "task_count": 2048, 00:22:36.768 "sequence_count": 2048, 00:22:36.768 "buf_count": 2048 00:22:36.768 } 00:22:36.768 } 00:22:36.768 ] 00:22:36.768 }, 00:22:36.768 { 00:22:36.768 "subsystem": "bdev", 00:22:36.768 "config": [ 00:22:36.768 { 00:22:36.768 "method": "bdev_set_options", 00:22:36.768 "params": { 00:22:36.768 "bdev_io_pool_size": 65535, 00:22:36.768 "bdev_io_cache_size": 256, 00:22:36.768 "bdev_auto_examine": true, 00:22:36.768 "iobuf_small_cache_size": 128, 00:22:36.768 "iobuf_large_cache_size": 16 00:22:36.768 } 00:22:36.768 }, 00:22:36.768 { 00:22:36.768 "method": "bdev_raid_set_options", 
00:22:36.768 "params": { 00:22:36.768 "process_window_size_kb": 1024, 00:22:36.768 "process_max_bandwidth_mb_sec": 0 00:22:36.768 } 00:22:36.768 }, 00:22:36.768 { 00:22:36.768 "method": "bdev_iscsi_set_options", 00:22:36.768 "params": { 00:22:36.768 "timeout_sec": 30 00:22:36.768 } 00:22:36.768 }, 00:22:36.768 { 00:22:36.768 "method": "bdev_nvme_set_options", 00:22:36.768 "params": { 00:22:36.768 "action_on_timeout": "none", 00:22:36.768 "timeout_us": 0, 00:22:36.768 "timeout_admin_us": 0, 00:22:36.768 "keep_alive_timeout_ms": 10000, 00:22:36.768 "arbitration_burst": 0, 00:22:36.768 "low_priority_weight": 0, 00:22:36.768 "medium_priority_weight": 0, 00:22:36.768 "high_priority_weight": 0, 00:22:36.768 "nvme_adminq_poll_period_us": 10000, 00:22:36.768 "nvme_ioq_poll_period_us": 0, 00:22:36.768 "io_queue_requests": 512, 00:22:36.768 "delay_cmd_submit": true, 00:22:36.768 "transport_retry_count": 4, 00:22:36.768 "bdev_retry_count": 3, 00:22:36.768 "transport_ack_timeout": 0, 00:22:36.768 "ctrlr_loss_timeout_sec": 0, 00:22:36.768 "reconnect_delay_sec": 0, 00:22:36.768 "fast_io_fail_timeout_sec": 0, 00:22:36.768 "disable_auto_failback": false, 00:22:36.768 "generate_uuids": false, 00:22:36.768 "transport_tos": 0, 00:22:36.768 "nvme_error_stat": false, 00:22:36.768 "rdma_srq_size": 0, 00:22:36.768 "io_path_stat": false, 00:22:36.768 "allow_accel_sequence": false, 00:22:36.768 "rdma_max_cq_size": 0, 00:22:36.768 "rdma_cm_event_timeout_ms": 0, 00:22:36.768 "dhchap_digests": [ 00:22:36.768 "sha256", 00:22:36.768 "sha384", 00:22:36.768 "sha512" 00:22:36.768 ], 00:22:36.768 "dhchap_dhgroups": [ 00:22:36.768 "null", 00:22:36.768 "ffdhe2048", 00:22:36.768 "ffdhe3072", 00:22:36.768 "ffdhe4096", 00:22:36.769 "ffdhe6144", 00:22:36.769 "ffdhe8192" 00:22:36.769 ] 00:22:36.769 } 00:22:36.769 }, 00:22:36.769 { 00:22:36.769 "method": "bdev_nvme_attach_controller", 00:22:36.769 "params": { 00:22:36.769 "name": "TLSTEST", 00:22:36.769 "trtype": "TCP", 00:22:36.769 "adrfam": "IPv4", 
00:22:36.769 "traddr": "10.0.0.2", 00:22:36.769 "trsvcid": "4420", 00:22:36.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.769 "prchk_reftag": false, 00:22:36.769 "prchk_guard": false, 00:22:36.769 "ctrlr_loss_timeout_sec": 0, 00:22:36.769 "reconnect_delay_sec": 0, 00:22:36.769 "fast_io_fail_timeout_sec": 0, 00:22:36.769 "psk": "/tmp/tmp.sj8rja32Au", 00:22:36.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.769 "hdgst": false, 00:22:36.769 "ddgst": false 00:22:36.769 } 00:22:36.769 }, 00:22:36.769 { 00:22:36.769 "method": "bdev_nvme_set_hotplug", 00:22:36.769 "params": { 00:22:36.769 "period_us": 100000, 00:22:36.769 "enable": false 00:22:36.769 } 00:22:36.769 }, 00:22:36.769 { 00:22:36.769 "method": "bdev_wait_for_examine" 00:22:36.769 } 00:22:36.769 ] 00:22:36.769 }, 00:22:36.769 { 00:22:36.769 "subsystem": "nbd", 00:22:36.769 "config": [] 00:22:36.769 } 00:22:36.769 ] 00:22:36.769 }' 00:22:36.769 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2305164 00:22:36.769 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2305164 ']' 00:22:36.769 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2305164 00:22:36.769 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:36.769 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:36.769 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2305164 00:22:36.769 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:36.769 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:36.769 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2305164' 00:22:36.769 killing process with 
pid 2305164 00:22:36.769 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2305164 00:22:36.769 Received shutdown signal, test time was about 10.000000 seconds 00:22:36.769 00:22:36.769 Latency(us) 00:22:36.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.769 =================================================================================================================== 00:22:36.769 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:36.769 [2024-07-26 01:58:18.743991] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:36.769 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2305164 00:22:37.026 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2304997 00:22:37.026 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2304997 ']' 00:22:37.026 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2304997 00:22:37.026 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:37.026 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:37.026 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2304997 00:22:37.026 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:37.026 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:37.026 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2304997' 00:22:37.027 killing process with pid 2304997 00:22:37.027 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 2304997 00:22:37.027 [2024-07-26 01:58:18.967413] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:37.027 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2304997 00:22:37.286 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:37.286 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.286 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:37.286 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:37.286 "subsystems": [ 00:22:37.286 { 00:22:37.286 "subsystem": "keyring", 00:22:37.286 "config": [] 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "subsystem": "iobuf", 00:22:37.286 "config": [ 00:22:37.286 { 00:22:37.286 "method": "iobuf_set_options", 00:22:37.286 "params": { 00:22:37.286 "small_pool_count": 8192, 00:22:37.286 "large_pool_count": 1024, 00:22:37.286 "small_bufsize": 8192, 00:22:37.286 "large_bufsize": 135168 00:22:37.286 } 00:22:37.286 } 00:22:37.286 ] 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "subsystem": "sock", 00:22:37.286 "config": [ 00:22:37.286 { 00:22:37.286 "method": "sock_set_default_impl", 00:22:37.286 "params": { 00:22:37.286 "impl_name": "posix" 00:22:37.286 } 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "method": "sock_impl_set_options", 00:22:37.286 "params": { 00:22:37.286 "impl_name": "ssl", 00:22:37.286 "recv_buf_size": 4096, 00:22:37.286 "send_buf_size": 4096, 00:22:37.286 "enable_recv_pipe": true, 00:22:37.286 "enable_quickack": false, 00:22:37.286 "enable_placement_id": 0, 00:22:37.286 "enable_zerocopy_send_server": true, 00:22:37.286 "enable_zerocopy_send_client": false, 00:22:37.286 "zerocopy_threshold": 0, 00:22:37.286 "tls_version": 0, 00:22:37.286 "enable_ktls": false 
00:22:37.286 } 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "method": "sock_impl_set_options", 00:22:37.286 "params": { 00:22:37.286 "impl_name": "posix", 00:22:37.286 "recv_buf_size": 2097152, 00:22:37.286 "send_buf_size": 2097152, 00:22:37.286 "enable_recv_pipe": true, 00:22:37.286 "enable_quickack": false, 00:22:37.286 "enable_placement_id": 0, 00:22:37.286 "enable_zerocopy_send_server": true, 00:22:37.286 "enable_zerocopy_send_client": false, 00:22:37.286 "zerocopy_threshold": 0, 00:22:37.286 "tls_version": 0, 00:22:37.286 "enable_ktls": false 00:22:37.286 } 00:22:37.286 } 00:22:37.286 ] 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "subsystem": "vmd", 00:22:37.286 "config": [] 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "subsystem": "accel", 00:22:37.286 "config": [ 00:22:37.286 { 00:22:37.286 "method": "accel_set_options", 00:22:37.286 "params": { 00:22:37.286 "small_cache_size": 128, 00:22:37.286 "large_cache_size": 16, 00:22:37.286 "task_count": 2048, 00:22:37.286 "sequence_count": 2048, 00:22:37.286 "buf_count": 2048 00:22:37.286 } 00:22:37.286 } 00:22:37.286 ] 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "subsystem": "bdev", 00:22:37.286 "config": [ 00:22:37.286 { 00:22:37.286 "method": "bdev_set_options", 00:22:37.286 "params": { 00:22:37.286 "bdev_io_pool_size": 65535, 00:22:37.286 "bdev_io_cache_size": 256, 00:22:37.286 "bdev_auto_examine": true, 00:22:37.286 "iobuf_small_cache_size": 128, 00:22:37.286 "iobuf_large_cache_size": 16 00:22:37.286 } 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "method": "bdev_raid_set_options", 00:22:37.286 "params": { 00:22:37.286 "process_window_size_kb": 1024, 00:22:37.286 "process_max_bandwidth_mb_sec": 0 00:22:37.286 } 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "method": "bdev_iscsi_set_options", 00:22:37.286 "params": { 00:22:37.286 "timeout_sec": 30 00:22:37.286 } 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "method": "bdev_nvme_set_options", 00:22:37.286 "params": { 00:22:37.286 "action_on_timeout": "none", 00:22:37.286 
"timeout_us": 0, 00:22:37.286 "timeout_admin_us": 0, 00:22:37.286 "keep_alive_timeout_ms": 10000, 00:22:37.286 "arbitration_burst": 0, 00:22:37.286 "low_priority_weight": 0, 00:22:37.286 "medium_priority_weight": 0, 00:22:37.286 "high_priority_weight": 0, 00:22:37.286 "nvme_adminq_poll_period_us": 10000, 00:22:37.286 "nvme_ioq_poll_period_us": 0, 00:22:37.286 "io_queue_requests": 0, 00:22:37.286 "delay_cmd_submit": true, 00:22:37.286 "transport_retry_count": 4, 00:22:37.286 "bdev_retry_count": 3, 00:22:37.286 "transport_ack_timeout": 0, 00:22:37.286 "ctrlr_loss_timeout_sec": 0, 00:22:37.286 "reconnect_delay_sec": 0, 00:22:37.286 "fast_io_fail_timeout_sec": 0, 00:22:37.286 "disable_auto_failback": false, 00:22:37.286 "generate_uuids": false, 00:22:37.286 "transport_tos": 0, 00:22:37.286 "nvme_error_stat": false, 00:22:37.286 "rdma_srq_size": 0, 00:22:37.286 "io_path_stat": false, 00:22:37.286 "allow_accel_sequence": false, 00:22:37.286 "rdma_max_cq_size": 0, 00:22:37.286 "rdma_cm_event_timeout_ms": 0, 00:22:37.286 "dhchap_digests": [ 00:22:37.286 "sha256", 00:22:37.286 "sha384", 00:22:37.286 "sha512" 00:22:37.286 ], 00:22:37.286 "dhchap_dhgroups": [ 00:22:37.286 "null", 00:22:37.286 "ffdhe2048", 00:22:37.286 "ffdhe3072", 00:22:37.286 "ffdhe4096", 00:22:37.286 "ffdhe6144", 00:22:37.286 "ffdhe8192" 00:22:37.286 ] 00:22:37.286 } 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "method": "bdev_nvme_set_hotplug", 00:22:37.286 "params": { 00:22:37.286 "period_us": 100000, 00:22:37.286 "enable": false 00:22:37.286 } 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "method": "bdev_malloc_create", 00:22:37.286 "params": { 00:22:37.286 "name": "malloc0", 00:22:37.286 "num_blocks": 8192, 00:22:37.286 "block_size": 4096, 00:22:37.286 "physical_block_size": 4096, 00:22:37.286 "uuid": "d5695daa-c484-4093-9e1a-b56921af573b", 00:22:37.286 "optimal_io_boundary": 0, 00:22:37.286 "md_size": 0, 00:22:37.286 "dif_type": 0, 00:22:37.286 "dif_is_head_of_md": false, 00:22:37.286 "dif_pi_format": 0 
00:22:37.286 } 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "method": "bdev_wait_for_examine" 00:22:37.286 } 00:22:37.286 ] 00:22:37.286 }, 00:22:37.286 { 00:22:37.286 "subsystem": "nbd", 00:22:37.286 "config": [] 00:22:37.286 }, 00:22:37.286 { 00:22:37.287 "subsystem": "scheduler", 00:22:37.287 "config": [ 00:22:37.287 { 00:22:37.287 "method": "framework_set_scheduler", 00:22:37.287 "params": { 00:22:37.287 "name": "static" 00:22:37.287 } 00:22:37.287 } 00:22:37.287 ] 00:22:37.287 }, 00:22:37.287 { 00:22:37.287 "subsystem": "nvmf", 00:22:37.287 "config": [ 00:22:37.287 { 00:22:37.287 "method": "nvmf_set_config", 00:22:37.287 "params": { 00:22:37.287 "discovery_filter": "match_any", 00:22:37.287 "admin_cmd_passthru": { 00:22:37.287 "identify_ctrlr": false 00:22:37.287 } 00:22:37.287 } 00:22:37.287 }, 00:22:37.287 { 00:22:37.287 "method": "nvmf_set_max_subsystems", 00:22:37.287 "params": { 00:22:37.287 "max_subsystems": 1024 00:22:37.287 } 00:22:37.287 }, 00:22:37.287 { 00:22:37.287 "method": "nvmf_set_crdt", 00:22:37.287 "params": { 00:22:37.287 "crdt1": 0, 00:22:37.287 "crdt2": 0, 00:22:37.287 "crdt3": 0 00:22:37.287 } 00:22:37.287 }, 00:22:37.287 { 00:22:37.287 "method": "nvmf_create_transport", 00:22:37.287 "params": { 00:22:37.287 "trtype": "TCP", 00:22:37.287 "max_queue_depth": 128, 00:22:37.287 "max_io_qpairs_per_ctrlr": 127, 00:22:37.287 "in_capsule_data_size": 4096, 00:22:37.287 "max_io_size": 131072, 00:22:37.287 "io_unit_size": 131072, 00:22:37.287 "max_aq_depth": 128, 00:22:37.287 "num_shared_buffers": 511, 00:22:37.287 "buf_cache_size": 4294967295, 00:22:37.287 "dif_insert_or_strip": false, 00:22:37.287 "zcopy": false, 00:22:37.287 "c2h_success": false, 00:22:37.287 "sock_priority": 0, 00:22:37.287 "abort_timeout_sec": 1, 00:22:37.287 "ack_timeout": 0, 00:22:37.287 "data_wr_pool_size": 0 00:22:37.287 } 00:22:37.287 }, 00:22:37.287 { 00:22:37.287 "method": "nvmf_create_subsystem", 00:22:37.287 "params": { 00:22:37.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:22:37.287 "allow_any_host": false, 00:22:37.287 "serial_number": "SPDK00000000000001", 00:22:37.287 "model_number": "SPDK bdev Controller", 00:22:37.287 "max_namespaces": 10, 00:22:37.287 "min_cntlid": 1, 00:22:37.287 "max_cntlid": 65519, 00:22:37.287 "ana_reporting": false 00:22:37.287 } 00:22:37.287 }, 00:22:37.287 { 00:22:37.287 "method": "nvmf_subsystem_add_host", 00:22:37.287 "params": { 00:22:37.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.287 "host": "nqn.2016-06.io.spdk:host1", 00:22:37.287 "psk": "/tmp/tmp.sj8rja32Au" 00:22:37.287 } 00:22:37.287 }, 00:22:37.287 { 00:22:37.287 "method": "nvmf_subsystem_add_ns", 00:22:37.287 "params": { 00:22:37.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.287 "namespace": { 00:22:37.287 "nsid": 1, 00:22:37.287 "bdev_name": "malloc0", 00:22:37.287 "nguid": "D5695DAAC48440939E1AB56921AF573B", 00:22:37.287 "uuid": "d5695daa-c484-4093-9e1a-b56921af573b", 00:22:37.287 "no_auto_visible": false 00:22:37.287 } 00:22:37.287 } 00:22:37.287 }, 00:22:37.287 { 00:22:37.287 "method": "nvmf_subsystem_add_listener", 00:22:37.287 "params": { 00:22:37.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.287 "listen_address": { 00:22:37.287 "trtype": "TCP", 00:22:37.287 "adrfam": "IPv4", 00:22:37.287 "traddr": "10.0.0.2", 00:22:37.287 "trsvcid": "4420" 00:22:37.287 }, 00:22:37.287 "secure_channel": true 00:22:37.287 } 00:22:37.287 } 00:22:37.287 ] 00:22:37.287 } 00:22:37.287 ] 00:22:37.287 }' 00:22:37.287 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.287 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2305440 00:22:37.287 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:37.287 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2305440 00:22:37.287 
01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2305440 ']' 00:22:37.287 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.287 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.287 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.287 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.287 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.287 [2024-07-26 01:58:19.266499] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:37.287 [2024-07-26 01:58:19.266594] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.546 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.546 [2024-07-26 01:58:19.334681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.546 [2024-07-26 01:58:19.421634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.546 [2024-07-26 01:58:19.421700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.546 [2024-07-26 01:58:19.421725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.546 [2024-07-26 01:58:19.421739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:37.546 [2024-07-26 01:58:19.421751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.546 [2024-07-26 01:58:19.421840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.805 [2024-07-26 01:58:19.659030] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.805 [2024-07-26 01:58:19.683854] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:37.805 [2024-07-26 01:58:19.699926] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:37.805 [2024-07-26 01:58:19.700203] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.371 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.371 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:38.371 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.372 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.372 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.372 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.372 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2305591 00:22:38.372 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2305591 /var/tmp/bdevperf.sock 00:22:38.372 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2305591 ']' 00:22:38.372 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.372 01:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:38.372 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.372 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.372 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.372 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:38.372 "subsystems": [ 00:22:38.372 { 00:22:38.372 "subsystem": "keyring", 00:22:38.372 "config": [] 00:22:38.372 }, 00:22:38.372 { 00:22:38.372 "subsystem": "iobuf", 00:22:38.372 "config": [ 00:22:38.372 { 00:22:38.372 "method": "iobuf_set_options", 00:22:38.372 "params": { 00:22:38.372 "small_pool_count": 8192, 00:22:38.372 "large_pool_count": 1024, 00:22:38.372 "small_bufsize": 8192, 00:22:38.372 "large_bufsize": 135168 00:22:38.372 } 00:22:38.372 } 00:22:38.372 ] 00:22:38.372 }, 00:22:38.372 { 00:22:38.372 "subsystem": "sock", 00:22:38.372 "config": [ 00:22:38.372 { 00:22:38.372 "method": "sock_set_default_impl", 00:22:38.372 "params": { 00:22:38.372 "impl_name": "posix" 00:22:38.372 } 00:22:38.372 }, 00:22:38.372 { 00:22:38.372 "method": "sock_impl_set_options", 00:22:38.372 "params": { 00:22:38.372 "impl_name": "ssl", 00:22:38.372 "recv_buf_size": 4096, 00:22:38.372 "send_buf_size": 4096, 00:22:38.372 "enable_recv_pipe": true, 00:22:38.372 "enable_quickack": false, 00:22:38.372 "enable_placement_id": 0, 00:22:38.372 "enable_zerocopy_send_server": true, 00:22:38.372 "enable_zerocopy_send_client": false, 00:22:38.372 
"zerocopy_threshold": 0, 00:22:38.372 "tls_version": 0, 00:22:38.372 "enable_ktls": false 00:22:38.372 } 00:22:38.372 }, 00:22:38.372 { 00:22:38.372 "method": "sock_impl_set_options", 00:22:38.372 "params": { 00:22:38.372 "impl_name": "posix", 00:22:38.372 "recv_buf_size": 2097152, 00:22:38.372 "send_buf_size": 2097152, 00:22:38.372 "enable_recv_pipe": true, 00:22:38.372 "enable_quickack": false, 00:22:38.372 "enable_placement_id": 0, 00:22:38.372 "enable_zerocopy_send_server": true, 00:22:38.372 "enable_zerocopy_send_client": false, 00:22:38.372 "zerocopy_threshold": 0, 00:22:38.372 "tls_version": 0, 00:22:38.372 "enable_ktls": false 00:22:38.372 } 00:22:38.372 } 00:22:38.372 ] 00:22:38.372 }, 00:22:38.372 { 00:22:38.372 "subsystem": "vmd", 00:22:38.372 "config": [] 00:22:38.372 }, 00:22:38.372 { 00:22:38.372 "subsystem": "accel", 00:22:38.372 "config": [ 00:22:38.372 { 00:22:38.372 "method": "accel_set_options", 00:22:38.372 "params": { 00:22:38.372 "small_cache_size": 128, 00:22:38.372 "large_cache_size": 16, 00:22:38.372 "task_count": 2048, 00:22:38.372 "sequence_count": 2048, 00:22:38.372 "buf_count": 2048 00:22:38.372 } 00:22:38.372 } 00:22:38.372 ] 00:22:38.372 }, 00:22:38.372 { 00:22:38.372 "subsystem": "bdev", 00:22:38.372 "config": [ 00:22:38.372 { 00:22:38.372 "method": "bdev_set_options", 00:22:38.372 "params": { 00:22:38.372 "bdev_io_pool_size": 65535, 00:22:38.372 "bdev_io_cache_size": 256, 00:22:38.372 "bdev_auto_examine": true, 00:22:38.372 "iobuf_small_cache_size": 128, 00:22:38.372 "iobuf_large_cache_size": 16 00:22:38.372 } 00:22:38.372 }, 00:22:38.372 { 00:22:38.372 "method": "bdev_raid_set_options", 00:22:38.372 "params": { 00:22:38.372 "process_window_size_kb": 1024, 00:22:38.372 "process_max_bandwidth_mb_sec": 0 00:22:38.372 } 00:22:38.372 }, 00:22:38.372 { 00:22:38.372 "method": "bdev_iscsi_set_options", 00:22:38.372 "params": { 00:22:38.372 "timeout_sec": 30 00:22:38.372 } 00:22:38.372 }, 00:22:38.372 { 00:22:38.372 "method": 
"bdev_nvme_set_options", 00:22:38.372 "params": { 00:22:38.372 "action_on_timeout": "none", 00:22:38.372 "timeout_us": 0, 00:22:38.372 "timeout_admin_us": 0, 00:22:38.372 "keep_alive_timeout_ms": 10000, 00:22:38.372 "arbitration_burst": 0, 00:22:38.372 "low_priority_weight": 0, 00:22:38.372 "medium_priority_weight": 0, 00:22:38.372 "high_priority_weight": 0, 00:22:38.372 "nvme_adminq_poll_period_us": 10000, 00:22:38.372 "nvme_ioq_poll_period_us": 0, 00:22:38.372 "io_queue_requests": 512, 00:22:38.372 "delay_cmd_submit": true, 00:22:38.372 "transport_retry_count": 4, 00:22:38.372 "bdev_retry_count": 3, 00:22:38.372 "transport_ack_timeout": 0, 00:22:38.372 "ctrlr_loss_timeout_sec": 0, 00:22:38.372 "reconnect_delay_sec": 0, 00:22:38.372 "fast_io_fail_timeout_sec": 0, 00:22:38.372 "disable_auto_failback": false, 00:22:38.372 "generate_uuids": false, 00:22:38.372 "transport_tos": 0, 00:22:38.372 "nvme_error_stat": false, 00:22:38.372 "rdma_srq_size": 0, 00:22:38.372 "io_path_stat": false, 00:22:38.372 "allow_accel_sequence": false, 00:22:38.372 "rdma_max_cq_size": 0, 00:22:38.372 "rdma_cm_event_timeout_ms": 0, 00:22:38.372 "dhchap_digests": [ 00:22:38.372 "sha256", 00:22:38.372 "sha384", 00:22:38.372 "sha512" 00:22:38.372 ], 00:22:38.372 "dhchap_dhgroups": [ 00:22:38.372 "null", 00:22:38.372 "ffdhe2048", 00:22:38.372 "ffdhe3072", 00:22:38.372 "ffdhe4096", 00:22:38.372 "ffdhe6144", 00:22:38.372 "ffdhe8192" 00:22:38.372 ] 00:22:38.372 } 00:22:38.372 }, 00:22:38.372 { 00:22:38.372 "method": "bdev_nvme_attach_controller", 00:22:38.372 "params": { 00:22:38.372 "name": "TLSTEST", 00:22:38.372 "trtype": "TCP", 00:22:38.372 "adrfam": "IPv4", 00:22:38.372 "traddr": "10.0.0.2", 00:22:38.372 "trsvcid": "4420", 00:22:38.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.373 "prchk_reftag": false, 00:22:38.373 "prchk_guard": false, 00:22:38.373 "ctrlr_loss_timeout_sec": 0, 00:22:38.373 "reconnect_delay_sec": 0, 00:22:38.373 "fast_io_fail_timeout_sec": 0, 00:22:38.373 "psk": 
"/tmp/tmp.sj8rja32Au", 00:22:38.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.373 "hdgst": false, 00:22:38.373 "ddgst": false 00:22:38.373 } 00:22:38.373 }, 00:22:38.373 { 00:22:38.373 "method": "bdev_nvme_set_hotplug", 00:22:38.373 "params": { 00:22:38.373 "period_us": 100000, 00:22:38.373 "enable": false 00:22:38.373 } 00:22:38.373 }, 00:22:38.373 { 00:22:38.373 "method": "bdev_wait_for_examine" 00:22:38.373 } 00:22:38.373 ] 00:22:38.373 }, 00:22:38.373 { 00:22:38.373 "subsystem": "nbd", 00:22:38.373 "config": [] 00:22:38.373 } 00:22:38.373 ] 00:22:38.373 }' 00:22:38.373 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.373 [2024-07-26 01:58:20.269513] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:38.373 [2024-07-26 01:58:20.269601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2305591 ] 00:22:38.373 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.373 [2024-07-26 01:58:20.328297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.631 [2024-07-26 01:58:20.415509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.631 [2024-07-26 01:58:20.573281] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:38.631 [2024-07-26 01:58:20.573412] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:39.564 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.564 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:39.564 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:39.564 Running I/O for 10 seconds... 00:22:49.523 00:22:49.523 Latency(us) 00:22:49.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.523 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:49.523 Verification LBA range: start 0x0 length 0x2000 00:22:49.523 TLSTESTn1 : 10.03 3542.29 13.84 0.00 0.00 36057.94 5971.06 40001.23 00:22:49.523 =================================================================================================================== 00:22:49.523 Total : 3542.29 13.84 0.00 0.00 36057.94 5971.06 40001.23 00:22:49.523 0 00:22:49.523 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.523 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2305591 00:22:49.523 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2305591 ']' 00:22:49.523 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2305591 00:22:49.523 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:49.523 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:49.523 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2305591 00:22:49.523 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:49.523 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:49.523 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2305591' 00:22:49.523 killing process with pid 2305591 00:22:49.523 01:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2305591 00:22:49.523 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.523 00:22:49.523 Latency(us) 00:22:49.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.523 =================================================================================================================== 00:22:49.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:49.523 [2024-07-26 01:58:31.433517] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:49.523 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2305591 00:22:49.781 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2305440 00:22:49.781 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2305440 ']' 00:22:49.781 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2305440 00:22:49.781 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:49.781 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:49.781 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2305440 00:22:49.781 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:49.781 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:49.781 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2305440' 00:22:49.781 killing process with pid 2305440 00:22:49.781 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2305440 00:22:49.781 
[2024-07-26 01:58:31.690328] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:49.781 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2305440 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2306919 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2306919 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2306919 ']' 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:50.038 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.038 [2024-07-26 01:58:31.987827] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:50.038 [2024-07-26 01:58:31.987907] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.038 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.296 [2024-07-26 01:58:32.057230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.296 [2024-07-26 01:58:32.147956] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.296 [2024-07-26 01:58:32.148016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.296 [2024-07-26 01:58:32.148043] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.296 [2024-07-26 01:58:32.148057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.296 [2024-07-26 01:58:32.148077] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:50.296 [2024-07-26 01:58:32.148123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.296 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.296 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:50.296 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.296 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.296 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.296 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.296 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.sj8rja32Au 00:22:50.297 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sj8rja32Au 00:22:50.297 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:50.555 [2024-07-26 01:58:32.516373] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.555 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:50.812 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:51.070 [2024-07-26 01:58:33.013696] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:51.070 [2024-07-26 01:58:33.013962] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:51.070 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:51.328 malloc0 00:22:51.328 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:51.585 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sj8rja32Au 00:22:51.843 [2024-07-26 01:58:33.767454] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:51.843 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2307203 00:22:51.843 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:51.843 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.843 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2307203 /var/tmp/bdevperf.sock 00:22:51.843 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2307203 ']' 00:22:51.843 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.843 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.843 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:51.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.843 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.843 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.843 [2024-07-26 01:58:33.830216] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:51.843 [2024-07-26 01:58:33.830299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307203 ] 00:22:52.102 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.102 [2024-07-26 01:58:33.893020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.102 [2024-07-26 01:58:33.983357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.102 01:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:52.102 01:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:52.102 01:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sj8rja32Au 00:22:52.360 01:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:52.618 [2024-07-26 01:58:34.583384] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.876 nvme0n1 00:22:52.876 01:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:52.876 Running I/O for 1 seconds... 00:22:53.811 00:22:53.811 Latency(us) 00:22:53.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.811 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:53.811 Verification LBA range: start 0x0 length 0x2000 00:22:53.811 nvme0n1 : 1.02 3129.82 12.23 0.00 0.00 40430.51 7039.05 61749.48 00:22:53.811 =================================================================================================================== 00:22:53.811 Total : 3129.82 12.23 0.00 0.00 40430.51 7039.05 61749.48 00:22:53.811 0 00:22:54.070 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2307203 00:22:54.070 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2307203 ']' 00:22:54.070 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2307203 00:22:54.070 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:54.070 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.070 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2307203 00:22:54.070 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:54.070 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:54.070 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2307203' 00:22:54.070 killing process with pid 2307203 00:22:54.070 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 
2307203 00:22:54.070 Received shutdown signal, test time was about 1.000000 seconds 00:22:54.070 00:22:54.070 Latency(us) 00:22:54.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.070 =================================================================================================================== 00:22:54.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.070 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2307203 00:22:54.352 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2306919 00:22:54.352 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2306919 ']' 00:22:54.352 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2306919 00:22:54.352 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:54.352 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.352 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2306919 00:22:54.352 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:54.352 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:54.352 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2306919' 00:22:54.352 killing process with pid 2306919 00:22:54.352 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2306919 00:22:54.352 [2024-07-26 01:58:36.112226] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:54.352 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2306919 
00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2307482 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2307482 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2307482 ']' 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.626 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.626 [2024-07-26 01:58:36.422271] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:22:54.626 [2024-07-26 01:58:36.422364] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.626 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.626 [2024-07-26 01:58:36.493567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.626 [2024-07-26 01:58:36.592881] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.626 [2024-07-26 01:58:36.592933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.626 [2024-07-26 01:58:36.592947] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.626 [2024-07-26 01:58:36.592959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.626 [2024-07-26 01:58:36.592970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:54.626 [2024-07-26 01:58:36.592997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.885 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:54.885 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:54.885 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.885 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:54.885 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.885 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.885 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:54.885 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.885 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.885 [2024-07-26 01:58:36.732962] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.885 malloc0 00:22:54.885 [2024-07-26 01:58:36.765493] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:54.885 [2024-07-26 01:58:36.787265] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.885 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.886 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2307628 00:22:54.886 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:54.886 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@256 -- # waitforlisten 2307628 /var/tmp/bdevperf.sock 00:22:54.886 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2307628 ']' 00:22:54.886 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.886 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.886 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.886 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.886 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.886 [2024-07-26 01:58:36.853355] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:22:54.886 [2024-07-26 01:58:36.853460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307628 ] 00:22:54.886 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.144 [2024-07-26 01:58:36.918636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.144 [2024-07-26 01:58:37.012899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.144 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:55.144 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:55.144 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sj8rja32Au 00:22:55.402 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:55.660 [2024-07-26 01:58:37.634004] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.918 nvme0n1 00:22:55.918 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:55.918 Running I/O for 1 seconds... 
00:22:57.294 00:22:57.294 Latency(us) 00:22:57.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.294 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:57.294 Verification LBA range: start 0x0 length 0x2000 00:22:57.294 nvme0n1 : 1.03 3255.58 12.72 0.00 0.00 38832.06 6747.78 71846.87 00:22:57.294 =================================================================================================================== 00:22:57.294 Total : 3255.58 12.72 0.00 0.00 38832.06 6747.78 71846.87 00:22:57.294 0 00:22:57.294 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:57.294 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.294 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.294 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.294 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:57.294 "subsystems": [ 00:22:57.294 { 00:22:57.294 "subsystem": "keyring", 00:22:57.294 "config": [ 00:22:57.294 { 00:22:57.294 "method": "keyring_file_add_key", 00:22:57.294 "params": { 00:22:57.294 "name": "key0", 00:22:57.294 "path": "/tmp/tmp.sj8rja32Au" 00:22:57.294 } 00:22:57.294 } 00:22:57.294 ] 00:22:57.294 }, 00:22:57.294 { 00:22:57.294 "subsystem": "iobuf", 00:22:57.294 "config": [ 00:22:57.294 { 00:22:57.294 "method": "iobuf_set_options", 00:22:57.294 "params": { 00:22:57.294 "small_pool_count": 8192, 00:22:57.294 "large_pool_count": 1024, 00:22:57.294 "small_bufsize": 8192, 00:22:57.294 "large_bufsize": 135168 00:22:57.294 } 00:22:57.294 } 00:22:57.294 ] 00:22:57.294 }, 00:22:57.294 { 00:22:57.294 "subsystem": "sock", 00:22:57.294 "config": [ 00:22:57.294 { 00:22:57.294 "method": "sock_set_default_impl", 00:22:57.294 "params": { 00:22:57.294 "impl_name": "posix" 00:22:57.294 } 
00:22:57.294 }, 00:22:57.294 { 00:22:57.294 "method": "sock_impl_set_options", 00:22:57.294 "params": { 00:22:57.294 "impl_name": "ssl", 00:22:57.294 "recv_buf_size": 4096, 00:22:57.294 "send_buf_size": 4096, 00:22:57.294 "enable_recv_pipe": true, 00:22:57.294 "enable_quickack": false, 00:22:57.294 "enable_placement_id": 0, 00:22:57.294 "enable_zerocopy_send_server": true, 00:22:57.294 "enable_zerocopy_send_client": false, 00:22:57.294 "zerocopy_threshold": 0, 00:22:57.294 "tls_version": 0, 00:22:57.294 "enable_ktls": false 00:22:57.294 } 00:22:57.294 }, 00:22:57.294 { 00:22:57.294 "method": "sock_impl_set_options", 00:22:57.294 "params": { 00:22:57.294 "impl_name": "posix", 00:22:57.294 "recv_buf_size": 2097152, 00:22:57.294 "send_buf_size": 2097152, 00:22:57.294 "enable_recv_pipe": true, 00:22:57.294 "enable_quickack": false, 00:22:57.294 "enable_placement_id": 0, 00:22:57.294 "enable_zerocopy_send_server": true, 00:22:57.294 "enable_zerocopy_send_client": false, 00:22:57.294 "zerocopy_threshold": 0, 00:22:57.294 "tls_version": 0, 00:22:57.294 "enable_ktls": false 00:22:57.294 } 00:22:57.294 } 00:22:57.294 ] 00:22:57.294 }, 00:22:57.294 { 00:22:57.294 "subsystem": "vmd", 00:22:57.294 "config": [] 00:22:57.294 }, 00:22:57.294 { 00:22:57.294 "subsystem": "accel", 00:22:57.294 "config": [ 00:22:57.294 { 00:22:57.294 "method": "accel_set_options", 00:22:57.294 "params": { 00:22:57.294 "small_cache_size": 128, 00:22:57.294 "large_cache_size": 16, 00:22:57.294 "task_count": 2048, 00:22:57.294 "sequence_count": 2048, 00:22:57.294 "buf_count": 2048 00:22:57.294 } 00:22:57.294 } 00:22:57.294 ] 00:22:57.294 }, 00:22:57.294 { 00:22:57.294 "subsystem": "bdev", 00:22:57.294 "config": [ 00:22:57.294 { 00:22:57.294 "method": "bdev_set_options", 00:22:57.294 "params": { 00:22:57.294 "bdev_io_pool_size": 65535, 00:22:57.294 "bdev_io_cache_size": 256, 00:22:57.294 "bdev_auto_examine": true, 00:22:57.294 "iobuf_small_cache_size": 128, 00:22:57.294 "iobuf_large_cache_size": 16 
00:22:57.294 } 00:22:57.294 }, 00:22:57.294 { 00:22:57.294 "method": "bdev_raid_set_options", 00:22:57.294 "params": { 00:22:57.294 "process_window_size_kb": 1024, 00:22:57.294 "process_max_bandwidth_mb_sec": 0 00:22:57.294 } 00:22:57.294 }, 00:22:57.294 { 00:22:57.294 "method": "bdev_iscsi_set_options", 00:22:57.294 "params": { 00:22:57.294 "timeout_sec": 30 00:22:57.294 } 00:22:57.294 }, 00:22:57.294 { 00:22:57.294 "method": "bdev_nvme_set_options", 00:22:57.294 "params": { 00:22:57.294 "action_on_timeout": "none", 00:22:57.294 "timeout_us": 0, 00:22:57.294 "timeout_admin_us": 0, 00:22:57.294 "keep_alive_timeout_ms": 10000, 00:22:57.294 "arbitration_burst": 0, 00:22:57.294 "low_priority_weight": 0, 00:22:57.294 "medium_priority_weight": 0, 00:22:57.294 "high_priority_weight": 0, 00:22:57.294 "nvme_adminq_poll_period_us": 10000, 00:22:57.294 "nvme_ioq_poll_period_us": 0, 00:22:57.294 "io_queue_requests": 0, 00:22:57.294 "delay_cmd_submit": true, 00:22:57.294 "transport_retry_count": 4, 00:22:57.294 "bdev_retry_count": 3, 00:22:57.294 "transport_ack_timeout": 0, 00:22:57.294 "ctrlr_loss_timeout_sec": 0, 00:22:57.294 "reconnect_delay_sec": 0, 00:22:57.294 "fast_io_fail_timeout_sec": 0, 00:22:57.294 "disable_auto_failback": false, 00:22:57.294 "generate_uuids": false, 00:22:57.294 "transport_tos": 0, 00:22:57.294 "nvme_error_stat": false, 00:22:57.294 "rdma_srq_size": 0, 00:22:57.294 "io_path_stat": false, 00:22:57.294 "allow_accel_sequence": false, 00:22:57.294 "rdma_max_cq_size": 0, 00:22:57.294 "rdma_cm_event_timeout_ms": 0, 00:22:57.294 "dhchap_digests": [ 00:22:57.294 "sha256", 00:22:57.294 "sha384", 00:22:57.295 "sha512" 00:22:57.295 ], 00:22:57.295 "dhchap_dhgroups": [ 00:22:57.295 "null", 00:22:57.295 "ffdhe2048", 00:22:57.295 "ffdhe3072", 00:22:57.295 "ffdhe4096", 00:22:57.295 "ffdhe6144", 00:22:57.295 "ffdhe8192" 00:22:57.295 ] 00:22:57.295 } 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "method": "bdev_nvme_set_hotplug", 00:22:57.295 "params": { 00:22:57.295 
"period_us": 100000, 00:22:57.295 "enable": false 00:22:57.295 } 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "method": "bdev_malloc_create", 00:22:57.295 "params": { 00:22:57.295 "name": "malloc0", 00:22:57.295 "num_blocks": 8192, 00:22:57.295 "block_size": 4096, 00:22:57.295 "physical_block_size": 4096, 00:22:57.295 "uuid": "13e7feb8-a4f7-40e1-b656-93e6c1d6a486", 00:22:57.295 "optimal_io_boundary": 0, 00:22:57.295 "md_size": 0, 00:22:57.295 "dif_type": 0, 00:22:57.295 "dif_is_head_of_md": false, 00:22:57.295 "dif_pi_format": 0 00:22:57.295 } 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "method": "bdev_wait_for_examine" 00:22:57.295 } 00:22:57.295 ] 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "subsystem": "nbd", 00:22:57.295 "config": [] 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "subsystem": "scheduler", 00:22:57.295 "config": [ 00:22:57.295 { 00:22:57.295 "method": "framework_set_scheduler", 00:22:57.295 "params": { 00:22:57.295 "name": "static" 00:22:57.295 } 00:22:57.295 } 00:22:57.295 ] 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "subsystem": "nvmf", 00:22:57.295 "config": [ 00:22:57.295 { 00:22:57.295 "method": "nvmf_set_config", 00:22:57.295 "params": { 00:22:57.295 "discovery_filter": "match_any", 00:22:57.295 "admin_cmd_passthru": { 00:22:57.295 "identify_ctrlr": false 00:22:57.295 } 00:22:57.295 } 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "method": "nvmf_set_max_subsystems", 00:22:57.295 "params": { 00:22:57.295 "max_subsystems": 1024 00:22:57.295 } 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "method": "nvmf_set_crdt", 00:22:57.295 "params": { 00:22:57.295 "crdt1": 0, 00:22:57.295 "crdt2": 0, 00:22:57.295 "crdt3": 0 00:22:57.295 } 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "method": "nvmf_create_transport", 00:22:57.295 "params": { 00:22:57.295 "trtype": "TCP", 00:22:57.295 "max_queue_depth": 128, 00:22:57.295 "max_io_qpairs_per_ctrlr": 127, 00:22:57.295 "in_capsule_data_size": 4096, 00:22:57.295 "max_io_size": 131072, 00:22:57.295 "io_unit_size": 
131072, 00:22:57.295 "max_aq_depth": 128, 00:22:57.295 "num_shared_buffers": 511, 00:22:57.295 "buf_cache_size": 4294967295, 00:22:57.295 "dif_insert_or_strip": false, 00:22:57.295 "zcopy": false, 00:22:57.295 "c2h_success": false, 00:22:57.295 "sock_priority": 0, 00:22:57.295 "abort_timeout_sec": 1, 00:22:57.295 "ack_timeout": 0, 00:22:57.295 "data_wr_pool_size": 0 00:22:57.295 } 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "method": "nvmf_create_subsystem", 00:22:57.295 "params": { 00:22:57.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.295 "allow_any_host": false, 00:22:57.295 "serial_number": "00000000000000000000", 00:22:57.295 "model_number": "SPDK bdev Controller", 00:22:57.295 "max_namespaces": 32, 00:22:57.295 "min_cntlid": 1, 00:22:57.295 "max_cntlid": 65519, 00:22:57.295 "ana_reporting": false 00:22:57.295 } 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "method": "nvmf_subsystem_add_host", 00:22:57.295 "params": { 00:22:57.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.295 "host": "nqn.2016-06.io.spdk:host1", 00:22:57.295 "psk": "key0" 00:22:57.295 } 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "method": "nvmf_subsystem_add_ns", 00:22:57.295 "params": { 00:22:57.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.295 "namespace": { 00:22:57.295 "nsid": 1, 00:22:57.295 "bdev_name": "malloc0", 00:22:57.295 "nguid": "13E7FEB8A4F740E1B65693E6C1D6A486", 00:22:57.295 "uuid": "13e7feb8-a4f7-40e1-b656-93e6c1d6a486", 00:22:57.295 "no_auto_visible": false 00:22:57.295 } 00:22:57.295 } 00:22:57.295 }, 00:22:57.295 { 00:22:57.295 "method": "nvmf_subsystem_add_listener", 00:22:57.295 "params": { 00:22:57.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.295 "listen_address": { 00:22:57.295 "trtype": "TCP", 00:22:57.295 "adrfam": "IPv4", 00:22:57.295 "traddr": "10.0.0.2", 00:22:57.295 "trsvcid": "4420" 00:22:57.295 }, 00:22:57.295 "secure_channel": false, 00:22:57.295 "sock_impl": "ssl" 00:22:57.295 } 00:22:57.295 } 00:22:57.295 ] 00:22:57.295 } 00:22:57.295 ] 
00:22:57.295 }' 00:22:57.295 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:57.554 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:57.554 "subsystems": [ 00:22:57.554 { 00:22:57.554 "subsystem": "keyring", 00:22:57.554 "config": [ 00:22:57.554 { 00:22:57.554 "method": "keyring_file_add_key", 00:22:57.554 "params": { 00:22:57.554 "name": "key0", 00:22:57.554 "path": "/tmp/tmp.sj8rja32Au" 00:22:57.554 } 00:22:57.554 } 00:22:57.554 ] 00:22:57.554 }, 00:22:57.554 { 00:22:57.554 "subsystem": "iobuf", 00:22:57.554 "config": [ 00:22:57.554 { 00:22:57.554 "method": "iobuf_set_options", 00:22:57.554 "params": { 00:22:57.554 "small_pool_count": 8192, 00:22:57.554 "large_pool_count": 1024, 00:22:57.554 "small_bufsize": 8192, 00:22:57.554 "large_bufsize": 135168 00:22:57.554 } 00:22:57.554 } 00:22:57.554 ] 00:22:57.554 }, 00:22:57.554 { 00:22:57.554 "subsystem": "sock", 00:22:57.554 "config": [ 00:22:57.554 { 00:22:57.554 "method": "sock_set_default_impl", 00:22:57.554 "params": { 00:22:57.554 "impl_name": "posix" 00:22:57.554 } 00:22:57.554 }, 00:22:57.554 { 00:22:57.554 "method": "sock_impl_set_options", 00:22:57.554 "params": { 00:22:57.554 "impl_name": "ssl", 00:22:57.554 "recv_buf_size": 4096, 00:22:57.554 "send_buf_size": 4096, 00:22:57.554 "enable_recv_pipe": true, 00:22:57.554 "enable_quickack": false, 00:22:57.554 "enable_placement_id": 0, 00:22:57.554 "enable_zerocopy_send_server": true, 00:22:57.554 "enable_zerocopy_send_client": false, 00:22:57.554 "zerocopy_threshold": 0, 00:22:57.554 "tls_version": 0, 00:22:57.554 "enable_ktls": false 00:22:57.554 } 00:22:57.554 }, 00:22:57.554 { 00:22:57.554 "method": "sock_impl_set_options", 00:22:57.554 "params": { 00:22:57.554 "impl_name": "posix", 00:22:57.554 "recv_buf_size": 2097152, 00:22:57.554 "send_buf_size": 2097152, 00:22:57.554 
"enable_recv_pipe": true, 00:22:57.554 "enable_quickack": false, 00:22:57.554 "enable_placement_id": 0, 00:22:57.554 "enable_zerocopy_send_server": true, 00:22:57.554 "enable_zerocopy_send_client": false, 00:22:57.554 "zerocopy_threshold": 0, 00:22:57.554 "tls_version": 0, 00:22:57.554 "enable_ktls": false 00:22:57.554 } 00:22:57.554 } 00:22:57.554 ] 00:22:57.554 }, 00:22:57.554 { 00:22:57.554 "subsystem": "vmd", 00:22:57.554 "config": [] 00:22:57.554 }, 00:22:57.554 { 00:22:57.554 "subsystem": "accel", 00:22:57.554 "config": [ 00:22:57.554 { 00:22:57.554 "method": "accel_set_options", 00:22:57.554 "params": { 00:22:57.554 "small_cache_size": 128, 00:22:57.554 "large_cache_size": 16, 00:22:57.554 "task_count": 2048, 00:22:57.554 "sequence_count": 2048, 00:22:57.554 "buf_count": 2048 00:22:57.554 } 00:22:57.554 } 00:22:57.554 ] 00:22:57.554 }, 00:22:57.554 { 00:22:57.554 "subsystem": "bdev", 00:22:57.554 "config": [ 00:22:57.554 { 00:22:57.554 "method": "bdev_set_options", 00:22:57.554 "params": { 00:22:57.554 "bdev_io_pool_size": 65535, 00:22:57.554 "bdev_io_cache_size": 256, 00:22:57.554 "bdev_auto_examine": true, 00:22:57.554 "iobuf_small_cache_size": 128, 00:22:57.554 "iobuf_large_cache_size": 16 00:22:57.554 } 00:22:57.554 }, 00:22:57.554 { 00:22:57.554 "method": "bdev_raid_set_options", 00:22:57.554 "params": { 00:22:57.554 "process_window_size_kb": 1024, 00:22:57.554 "process_max_bandwidth_mb_sec": 0 00:22:57.554 } 00:22:57.554 }, 00:22:57.554 { 00:22:57.554 "method": "bdev_iscsi_set_options", 00:22:57.554 "params": { 00:22:57.554 "timeout_sec": 30 00:22:57.554 } 00:22:57.554 }, 00:22:57.554 { 00:22:57.554 "method": "bdev_nvme_set_options", 00:22:57.554 "params": { 00:22:57.554 "action_on_timeout": "none", 00:22:57.554 "timeout_us": 0, 00:22:57.554 "timeout_admin_us": 0, 00:22:57.554 "keep_alive_timeout_ms": 10000, 00:22:57.554 "arbitration_burst": 0, 00:22:57.554 "low_priority_weight": 0, 00:22:57.554 "medium_priority_weight": 0, 00:22:57.554 
"high_priority_weight": 0, 00:22:57.554 "nvme_adminq_poll_period_us": 10000, 00:22:57.554 "nvme_ioq_poll_period_us": 0, 00:22:57.554 "io_queue_requests": 512, 00:22:57.554 "delay_cmd_submit": true, 00:22:57.554 "transport_retry_count": 4, 00:22:57.554 "bdev_retry_count": 3, 00:22:57.554 "transport_ack_timeout": 0, 00:22:57.554 "ctrlr_loss_timeout_sec": 0, 00:22:57.554 "reconnect_delay_sec": 0, 00:22:57.554 "fast_io_fail_timeout_sec": 0, 00:22:57.554 "disable_auto_failback": false, 00:22:57.554 "generate_uuids": false, 00:22:57.554 "transport_tos": 0, 00:22:57.554 "nvme_error_stat": false, 00:22:57.554 "rdma_srq_size": 0, 00:22:57.554 "io_path_stat": false, 00:22:57.554 "allow_accel_sequence": false, 00:22:57.554 "rdma_max_cq_size": 0, 00:22:57.554 "rdma_cm_event_timeout_ms": 0, 00:22:57.554 "dhchap_digests": [ 00:22:57.554 "sha256", 00:22:57.554 "sha384", 00:22:57.554 "sha512" 00:22:57.554 ], 00:22:57.554 "dhchap_dhgroups": [ 00:22:57.554 "null", 00:22:57.554 "ffdhe2048", 00:22:57.554 "ffdhe3072", 00:22:57.554 "ffdhe4096", 00:22:57.554 "ffdhe6144", 00:22:57.554 "ffdhe8192" 00:22:57.554 ] 00:22:57.554 } 00:22:57.554 }, 00:22:57.554 { 00:22:57.554 "method": "bdev_nvme_attach_controller", 00:22:57.554 "params": { 00:22:57.554 "name": "nvme0", 00:22:57.554 "trtype": "TCP", 00:22:57.554 "adrfam": "IPv4", 00:22:57.555 "traddr": "10.0.0.2", 00:22:57.555 "trsvcid": "4420", 00:22:57.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.555 "prchk_reftag": false, 00:22:57.555 "prchk_guard": false, 00:22:57.555 "ctrlr_loss_timeout_sec": 0, 00:22:57.555 "reconnect_delay_sec": 0, 00:22:57.555 "fast_io_fail_timeout_sec": 0, 00:22:57.555 "psk": "key0", 00:22:57.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.555 "hdgst": false, 00:22:57.555 "ddgst": false 00:22:57.555 } 00:22:57.555 }, 00:22:57.555 { 00:22:57.555 "method": "bdev_nvme_set_hotplug", 00:22:57.555 "params": { 00:22:57.555 "period_us": 100000, 00:22:57.555 "enable": false 00:22:57.555 } 00:22:57.555 }, 
00:22:57.555 { 00:22:57.555 "method": "bdev_enable_histogram", 00:22:57.555 "params": { 00:22:57.555 "name": "nvme0n1", 00:22:57.555 "enable": true 00:22:57.555 } 00:22:57.555 }, 00:22:57.555 { 00:22:57.555 "method": "bdev_wait_for_examine" 00:22:57.555 } 00:22:57.555 ] 00:22:57.555 }, 00:22:57.555 { 00:22:57.555 "subsystem": "nbd", 00:22:57.555 "config": [] 00:22:57.555 } 00:22:57.555 ] 00:22:57.555 }' 00:22:57.555 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2307628 00:22:57.555 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2307628 ']' 00:22:57.555 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2307628 00:22:57.555 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:57.555 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:57.555 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2307628 00:22:57.555 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:57.555 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:57.555 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2307628' 00:22:57.555 killing process with pid 2307628 00:22:57.555 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2307628 00:22:57.555 Received shutdown signal, test time was about 1.000000 seconds 00:22:57.555 00:22:57.555 Latency(us) 00:22:57.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.555 =================================================================================================================== 00:22:57.555 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:22:57.555 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2307628 00:22:57.813 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2307482 00:22:57.813 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2307482 ']' 00:22:57.813 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2307482 00:22:57.813 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:57.813 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:57.813 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2307482 00:22:57.813 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:57.813 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:57.813 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2307482' 00:22:57.813 killing process with pid 2307482 00:22:57.813 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2307482 00:22:57.813 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2307482 00:22:58.071 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:58.071 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:58.071 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:58.071 "subsystems": [ 00:22:58.071 { 00:22:58.071 "subsystem": "keyring", 00:22:58.071 "config": [ 00:22:58.071 { 00:22:58.071 "method": "keyring_file_add_key", 00:22:58.071 "params": { 00:22:58.071 "name": "key0", 00:22:58.071 "path": 
"/tmp/tmp.sj8rja32Au" 00:22:58.071 } 00:22:58.071 } 00:22:58.071 ] 00:22:58.071 }, 00:22:58.071 { 00:22:58.071 "subsystem": "iobuf", 00:22:58.071 "config": [ 00:22:58.071 { 00:22:58.071 "method": "iobuf_set_options", 00:22:58.071 "params": { 00:22:58.071 "small_pool_count": 8192, 00:22:58.071 "large_pool_count": 1024, 00:22:58.071 "small_bufsize": 8192, 00:22:58.071 "large_bufsize": 135168 00:22:58.071 } 00:22:58.071 } 00:22:58.071 ] 00:22:58.071 }, 00:22:58.071 { 00:22:58.071 "subsystem": "sock", 00:22:58.071 "config": [ 00:22:58.071 { 00:22:58.071 "method": "sock_set_default_impl", 00:22:58.071 "params": { 00:22:58.071 "impl_name": "posix" 00:22:58.071 } 00:22:58.071 }, 00:22:58.071 { 00:22:58.071 "method": "sock_impl_set_options", 00:22:58.071 "params": { 00:22:58.071 "impl_name": "ssl", 00:22:58.071 "recv_buf_size": 4096, 00:22:58.071 "send_buf_size": 4096, 00:22:58.071 "enable_recv_pipe": true, 00:22:58.071 "enable_quickack": false, 00:22:58.071 "enable_placement_id": 0, 00:22:58.071 "enable_zerocopy_send_server": true, 00:22:58.071 "enable_zerocopy_send_client": false, 00:22:58.071 "zerocopy_threshold": 0, 00:22:58.071 "tls_version": 0, 00:22:58.071 "enable_ktls": false 00:22:58.071 } 00:22:58.071 }, 00:22:58.071 { 00:22:58.071 "method": "sock_impl_set_options", 00:22:58.071 "params": { 00:22:58.071 "impl_name": "posix", 00:22:58.071 "recv_buf_size": 2097152, 00:22:58.071 "send_buf_size": 2097152, 00:22:58.071 "enable_recv_pipe": true, 00:22:58.071 "enable_quickack": false, 00:22:58.071 "enable_placement_id": 0, 00:22:58.071 "enable_zerocopy_send_server": true, 00:22:58.071 "enable_zerocopy_send_client": false, 00:22:58.071 "zerocopy_threshold": 0, 00:22:58.071 "tls_version": 0, 00:22:58.071 "enable_ktls": false 00:22:58.071 } 00:22:58.071 } 00:22:58.071 ] 00:22:58.071 }, 00:22:58.071 { 00:22:58.071 "subsystem": "vmd", 00:22:58.071 "config": [] 00:22:58.071 }, 00:22:58.071 { 00:22:58.071 "subsystem": "accel", 00:22:58.071 "config": [ 00:22:58.071 { 
00:22:58.071 "method": "accel_set_options", 00:22:58.071 "params": { 00:22:58.071 "small_cache_size": 128, 00:22:58.071 "large_cache_size": 16, 00:22:58.071 "task_count": 2048, 00:22:58.071 "sequence_count": 2048, 00:22:58.071 "buf_count": 2048 00:22:58.072 } 00:22:58.072 } 00:22:58.072 ] 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "subsystem": "bdev", 00:22:58.072 "config": [ 00:22:58.072 { 00:22:58.072 "method": "bdev_set_options", 00:22:58.072 "params": { 00:22:58.072 "bdev_io_pool_size": 65535, 00:22:58.072 "bdev_io_cache_size": 256, 00:22:58.072 "bdev_auto_examine": true, 00:22:58.072 "iobuf_small_cache_size": 128, 00:22:58.072 "iobuf_large_cache_size": 16 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "bdev_raid_set_options", 00:22:58.072 "params": { 00:22:58.072 "process_window_size_kb": 1024, 00:22:58.072 "process_max_bandwidth_mb_sec": 0 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "bdev_iscsi_set_options", 00:22:58.072 "params": { 00:22:58.072 "timeout_sec": 30 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "bdev_nvme_set_options", 00:22:58.072 "params": { 00:22:58.072 "action_on_timeout": "none", 00:22:58.072 "timeout_us": 0, 00:22:58.072 "timeout_admin_us": 0, 00:22:58.072 "keep_alive_timeout_ms": 10000, 00:22:58.072 "arbitration_burst": 0, 00:22:58.072 "low_priority_weight": 0, 00:22:58.072 "medium_priority_weight": 0, 00:22:58.072 "high_priority_weight": 0, 00:22:58.072 "nvme_adminq_poll_period_us": 10000, 00:22:58.072 "nvme_ioq_poll_period_us": 0, 00:22:58.072 "io_queue_requests": 0, 00:22:58.072 "delay_cmd_submit": true, 00:22:58.072 "transport_retry_count": 4, 00:22:58.072 "bdev_retry_count": 3, 00:22:58.072 "transport_ack_timeout": 0, 00:22:58.072 "ctrlr_loss_timeout_sec": 0, 00:22:58.072 "reconnect_delay_sec": 0, 00:22:58.072 "fast_io_fail_timeout_sec": 0, 00:22:58.072 "disable_auto_failback": false, 00:22:58.072 "generate_uuids": false, 00:22:58.072 "transport_tos": 0, 
00:22:58.072 "nvme_error_stat": false, 00:22:58.072 "rdma_srq_size": 0, 00:22:58.072 "io_path_stat": false, 00:22:58.072 "allow_accel_sequence": false, 00:22:58.072 "rdma_max_cq_size": 0, 00:22:58.072 "rdma_cm_event_timeout_ms": 0, 00:22:58.072 "dhchap_digests": [ 00:22:58.072 "sha256", 00:22:58.072 "sha384", 00:22:58.072 "sha512" 00:22:58.072 ], 00:22:58.072 "dhchap_dhgroups": [ 00:22:58.072 "null", 00:22:58.072 "ffdhe2048", 00:22:58.072 "ffdhe3072", 00:22:58.072 "ffdhe4096", 00:22:58.072 "ffdhe6144", 00:22:58.072 "ffdhe8192" 00:22:58.072 ] 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "bdev_nvme_set_hotplug", 00:22:58.072 "params": { 00:22:58.072 "period_us": 100000, 00:22:58.072 "enable": false 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "bdev_malloc_create", 00:22:58.072 "params": { 00:22:58.072 "name": "malloc0", 00:22:58.072 "num_blocks": 8192, 00:22:58.072 "block_size": 4096, 00:22:58.072 "physical_block_size": 4096, 00:22:58.072 "uuid": "13e7feb8-a4f7-40e1-b656-93e6c1d6a486", 00:22:58.072 "optimal_io_boundary": 0, 00:22:58.072 "md_size": 0, 00:22:58.072 "dif_type": 0, 00:22:58.072 "dif_is_head_of_md": false, 00:22:58.072 "dif_pi_format": 0 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "bdev_wait_for_examine" 00:22:58.072 } 00:22:58.072 ] 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "subsystem": "nbd", 00:22:58.072 "config": [] 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "subsystem": "scheduler", 00:22:58.072 "config": [ 00:22:58.072 { 00:22:58.072 "method": "framework_set_scheduler", 00:22:58.072 "params": { 00:22:58.072 "name": "static" 00:22:58.072 } 00:22:58.072 } 00:22:58.072 ] 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "subsystem": "nvmf", 00:22:58.072 "config": [ 00:22:58.072 { 00:22:58.072 "method": "nvmf_set_config", 00:22:58.072 "params": { 00:22:58.072 "discovery_filter": "match_any", 00:22:58.072 "admin_cmd_passthru": { 00:22:58.072 "identify_ctrlr": false 00:22:58.072 } 
00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "nvmf_set_max_subsystems", 00:22:58.072 "params": { 00:22:58.072 "max_subsystems": 1024 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "nvmf_set_crdt", 00:22:58.072 "params": { 00:22:58.072 "crdt1": 0, 00:22:58.072 "crdt2": 0, 00:22:58.072 "crdt3": 0 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "nvmf_create_transport", 00:22:58.072 "params": { 00:22:58.072 "trtype": "TCP", 00:22:58.072 "max_queue_depth": 128, 00:22:58.072 "max_io_qpairs_per_ctrlr": 127, 00:22:58.072 "in_capsule_data_size": 4096, 00:22:58.072 "max_io_size": 131072, 00:22:58.072 "io_unit_size": 131072, 00:22:58.072 "max_aq_depth": 128, 00:22:58.072 "num_shared_buffers": 511, 00:22:58.072 "buf_cache_size": 4294967295, 00:22:58.072 "dif_insert_or_strip": false, 00:22:58.072 "zcopy": false, 00:22:58.072 "c2h_success": false, 00:22:58.072 "sock_priority": 0, 00:22:58.072 "abort_timeout_sec": 1, 00:22:58.072 "ack_timeout": 0, 00:22:58.072 "data_wr_pool_size": 0 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "nvmf_create_subsystem", 00:22:58.072 "params": { 00:22:58.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.072 "allow_any_host": false, 00:22:58.072 "serial_number": "00000000000000000000", 00:22:58.072 "model_number": "SPDK bdev Controller", 00:22:58.072 "max_namespaces": 32, 00:22:58.072 "min_cntlid": 1, 00:22:58.072 "max_cntlid": 65519, 00:22:58.072 "ana_reporting": false 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "nvmf_subsystem_add_host", 00:22:58.072 "params": { 00:22:58.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.072 "host": "nqn.2016-06.io.spdk:host1", 00:22:58.072 "psk": "key0" 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "nvmf_subsystem_add_ns", 00:22:58.072 "params": { 00:22:58.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.072 "namespace": { 00:22:58.072 "nsid": 1, 00:22:58.072 "bdev_name": 
"malloc0", 00:22:58.072 "nguid": "13E7FEB8A4F740E1B65693E6C1D6A486", 00:22:58.072 "uuid": "13e7feb8-a4f7-40e1-b656-93e6c1d6a486", 00:22:58.072 "no_auto_visible": false 00:22:58.072 } 00:22:58.072 } 00:22:58.072 }, 00:22:58.072 { 00:22:58.072 "method": "nvmf_subsystem_add_listener", 00:22:58.072 "params": { 00:22:58.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.072 "listen_address": { 00:22:58.072 "trtype": "TCP", 00:22:58.072 "adrfam": "IPv4", 00:22:58.072 "traddr": "10.0.0.2", 00:22:58.072 "trsvcid": "4420" 00:22:58.072 }, 00:22:58.072 "secure_channel": false, 00:22:58.072 "sock_impl": "ssl" 00:22:58.072 } 00:22:58.072 } 00:22:58.072 ] 00:22:58.072 } 00:22:58.072 ] 00:22:58.072 }' 00:22:58.072 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.072 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.072 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2307921 00:22:58.072 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:58.072 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2307921 00:22:58.072 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2307921 ']' 00:22:58.072 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.072 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.072 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:58.072 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.072 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.072 [2024-07-26 01:58:39.916362] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:58.072 [2024-07-26 01:58:39.916453] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.072 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.072 [2024-07-26 01:58:39.983480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.331 [2024-07-26 01:58:40.084352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.331 [2024-07-26 01:58:40.084420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.331 [2024-07-26 01:58:40.084436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.331 [2024-07-26 01:58:40.084461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.331 [2024-07-26 01:58:40.084472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.331 [2024-07-26 01:58:40.084550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.331 [2024-07-26 01:58:40.319788] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.589 [2024-07-26 01:58:40.360897] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:58.589 [2024-07-26 01:58:40.361164] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.154 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2308069 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2308069 /var/tmp/bdevperf.sock 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2308069 ']' 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.155 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:59.155 "subsystems": [ 00:22:59.155 { 00:22:59.155 "subsystem": "keyring", 00:22:59.155 "config": [ 00:22:59.155 { 00:22:59.155 "method": "keyring_file_add_key", 00:22:59.155 "params": { 00:22:59.155 "name": "key0", 00:22:59.155 "path": "/tmp/tmp.sj8rja32Au" 00:22:59.155 } 00:22:59.155 } 00:22:59.155 ] 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "subsystem": "iobuf", 00:22:59.155 "config": [ 00:22:59.155 { 00:22:59.155 "method": "iobuf_set_options", 00:22:59.155 "params": { 00:22:59.155 "small_pool_count": 8192, 00:22:59.155 "large_pool_count": 1024, 00:22:59.155 "small_bufsize": 8192, 00:22:59.155 "large_bufsize": 135168 00:22:59.155 } 00:22:59.155 } 00:22:59.155 ] 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "subsystem": "sock", 00:22:59.155 "config": [ 00:22:59.155 { 00:22:59.155 "method": "sock_set_default_impl", 00:22:59.155 "params": { 00:22:59.155 "impl_name": "posix" 00:22:59.155 } 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "method": "sock_impl_set_options", 00:22:59.155 "params": { 00:22:59.155 "impl_name": "ssl", 00:22:59.155 "recv_buf_size": 4096, 00:22:59.155 "send_buf_size": 4096, 00:22:59.155 "enable_recv_pipe": true, 00:22:59.155 "enable_quickack": false, 00:22:59.155 "enable_placement_id": 0, 00:22:59.155 "enable_zerocopy_send_server": true, 00:22:59.155 "enable_zerocopy_send_client": false, 00:22:59.155 "zerocopy_threshold": 0, 00:22:59.155 
"tls_version": 0, 00:22:59.155 "enable_ktls": false 00:22:59.155 } 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "method": "sock_impl_set_options", 00:22:59.155 "params": { 00:22:59.155 "impl_name": "posix", 00:22:59.155 "recv_buf_size": 2097152, 00:22:59.155 "send_buf_size": 2097152, 00:22:59.155 "enable_recv_pipe": true, 00:22:59.155 "enable_quickack": false, 00:22:59.155 "enable_placement_id": 0, 00:22:59.155 "enable_zerocopy_send_server": true, 00:22:59.155 "enable_zerocopy_send_client": false, 00:22:59.155 "zerocopy_threshold": 0, 00:22:59.155 "tls_version": 0, 00:22:59.155 "enable_ktls": false 00:22:59.155 } 00:22:59.155 } 00:22:59.155 ] 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "subsystem": "vmd", 00:22:59.155 "config": [] 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "subsystem": "accel", 00:22:59.155 "config": [ 00:22:59.155 { 00:22:59.155 "method": "accel_set_options", 00:22:59.155 "params": { 00:22:59.155 "small_cache_size": 128, 00:22:59.155 "large_cache_size": 16, 00:22:59.155 "task_count": 2048, 00:22:59.155 "sequence_count": 2048, 00:22:59.155 "buf_count": 2048 00:22:59.155 } 00:22:59.155 } 00:22:59.155 ] 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "subsystem": "bdev", 00:22:59.155 "config": [ 00:22:59.155 { 00:22:59.155 "method": "bdev_set_options", 00:22:59.155 "params": { 00:22:59.155 "bdev_io_pool_size": 65535, 00:22:59.155 "bdev_io_cache_size": 256, 00:22:59.155 "bdev_auto_examine": true, 00:22:59.155 "iobuf_small_cache_size": 128, 00:22:59.155 "iobuf_large_cache_size": 16 00:22:59.155 } 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "method": "bdev_raid_set_options", 00:22:59.155 "params": { 00:22:59.155 "process_window_size_kb": 1024, 00:22:59.155 "process_max_bandwidth_mb_sec": 0 00:22:59.155 } 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "method": "bdev_iscsi_set_options", 00:22:59.155 "params": { 00:22:59.155 "timeout_sec": 30 00:22:59.155 } 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "method": "bdev_nvme_set_options", 00:22:59.155 "params": { 
00:22:59.155 "action_on_timeout": "none", 00:22:59.155 "timeout_us": 0, 00:22:59.155 "timeout_admin_us": 0, 00:22:59.155 "keep_alive_timeout_ms": 10000, 00:22:59.155 "arbitration_burst": 0, 00:22:59.155 "low_priority_weight": 0, 00:22:59.155 "medium_priority_weight": 0, 00:22:59.155 "high_priority_weight": 0, 00:22:59.155 "nvme_adminq_poll_period_us": 10000, 00:22:59.155 "nvme_ioq_poll_period_us": 0, 00:22:59.155 "io_queue_requests": 512, 00:22:59.155 "delay_cmd_submit": true, 00:22:59.155 "transport_retry_count": 4, 00:22:59.155 "bdev_retry_count": 3, 00:22:59.155 "transport_ack_timeout": 0, 00:22:59.155 "ctrlr_loss_timeout_sec": 0, 00:22:59.155 "reconnect_delay_sec": 0, 00:22:59.155 "fast_io_fail_timeout_sec": 0, 00:22:59.155 "disable_auto_failback": false, 00:22:59.155 "generate_uuids": false, 00:22:59.155 "transport_tos": 0, 00:22:59.155 "nvme_error_stat": false, 00:22:59.155 "rdma_srq_size": 0, 00:22:59.155 "io_path_stat": false, 00:22:59.155 "allow_accel_sequence": false, 00:22:59.155 "rdma_max_cq_size": 0, 00:22:59.155 "rdma_cm_event_timeout_ms": 0, 00:22:59.155 "dhchap_digests": [ 00:22:59.155 "sha256", 00:22:59.155 "sha384", 00:22:59.155 "sha512" 00:22:59.155 ], 00:22:59.155 "dhchap_dhgroups": [ 00:22:59.155 "null", 00:22:59.155 "ffdhe2048", 00:22:59.155 "ffdhe3072", 00:22:59.155 "ffdhe4096", 00:22:59.155 "ffdhe6144", 00:22:59.155 "ffdhe8192" 00:22:59.155 ] 00:22:59.155 } 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "method": "bdev_nvme_attach_controller", 00:22:59.155 "params": { 00:22:59.155 "name": "nvme0", 00:22:59.155 "trtype": "TCP", 00:22:59.155 "adrfam": "IPv4", 00:22:59.155 "traddr": "10.0.0.2", 00:22:59.155 "trsvcid": "4420", 00:22:59.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.155 "prchk_reftag": false, 00:22:59.155 "prchk_guard": false, 00:22:59.155 "ctrlr_loss_timeout_sec": 0, 00:22:59.155 "reconnect_delay_sec": 0, 00:22:59.155 "fast_io_fail_timeout_sec": 0, 00:22:59.155 "psk": "key0", 00:22:59.155 "hostnqn": 
"nqn.2016-06.io.spdk:host1", 00:22:59.155 "hdgst": false, 00:22:59.155 "ddgst": false 00:22:59.155 } 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "method": "bdev_nvme_set_hotplug", 00:22:59.155 "params": { 00:22:59.155 "period_us": 100000, 00:22:59.155 "enable": false 00:22:59.155 } 00:22:59.155 }, 00:22:59.155 { 00:22:59.155 "method": "bdev_enable_histogram", 00:22:59.156 "params": { 00:22:59.156 "name": "nvme0n1", 00:22:59.156 "enable": true 00:22:59.156 } 00:22:59.156 }, 00:22:59.156 { 00:22:59.156 "method": "bdev_wait_for_examine" 00:22:59.156 } 00:22:59.156 ] 00:22:59.156 }, 00:22:59.156 { 00:22:59.156 "subsystem": "nbd", 00:22:59.156 "config": [] 00:22:59.156 } 00:22:59.156 ] 00:22:59.156 }' 00:22:59.156 [2024-07-26 01:58:40.973625] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:22:59.156 [2024-07-26 01:58:40.973696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308069 ] 00:22:59.156 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.156 [2024-07-26 01:58:41.036313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.156 [2024-07-26 01:58:41.127426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.413 [2024-07-26 01:58:41.307173] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.978 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.978 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:59.978 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:59.978 01:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:23:00.236 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.236 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:00.493 Running I/O for 1 seconds... 00:23:01.426 00:23:01.426 Latency(us) 00:23:01.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.426 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.426 Verification LBA range: start 0x0 length 0x2000 00:23:01.426 nvme0n1 : 1.02 3404.29 13.30 0.00 0.00 37160.69 10097.40 39807.05 00:23:01.426 =================================================================================================================== 00:23:01.426 Total : 3404.29 13.30 0.00 0.00 37160.69 10097.40 39807.05 00:23:01.426 0 00:23:01.426 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:23:01.426 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:23:01.426 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:01.426 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:01.426 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:01.426 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:01.426 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:01.426 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:01.426 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:23:01.426 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:01.426 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:01.426 nvmf_trace.0 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2308069 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2308069 ']' 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2308069 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2308069 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2308069' 00:23:01.683 killing process with pid 2308069 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2308069 00:23:01.683 Received shutdown signal, test time was about 1.000000 seconds 00:23:01.683 00:23:01.683 Latency(us) 00:23:01.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.683 
=================================================================================================================== 00:23:01.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2308069 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:01.683 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:01.939 rmmod nvme_tcp 00:23:01.939 rmmod nvme_fabrics 00:23:01.939 rmmod nvme_keyring 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2307921 ']' 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2307921 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2307921 ']' 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2307921 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2307921 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:01.939 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2307921' 00:23:01.939 killing process with pid 2307921 00:23:01.940 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2307921 00:23:01.940 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2307921 00:23:02.197 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:02.197 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:02.197 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:02.197 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:02.197 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:02.197 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.197 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.197 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.095 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:04.095 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.QmLtVZwaRO /tmp/tmp.KZAnu2YuA8 /tmp/tmp.sj8rja32Au 00:23:04.095 00:23:04.095 real 1m19.209s 
00:23:04.095 user 2m10.034s 00:23:04.095 sys 0m24.739s 00:23:04.095 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:04.095 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.095 ************************************ 00:23:04.095 END TEST nvmf_tls 00:23:04.095 ************************************ 00:23:04.095 01:58:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:04.095 01:58:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:04.095 01:58:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:04.095 01:58:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:04.095 ************************************ 00:23:04.095 START TEST nvmf_fips 00:23:04.095 ************************************ 00:23:04.095 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:04.354 * Looking for test storage... 
00:23:04.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:04.354 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:04.355 Error setting digest 00:23:04.355 0022B013BF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:04.355 0022B013BF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:04.355 01:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:04.355 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:06.884 01:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:06.884 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:06.884 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.884 01:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:06.884 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.884 
01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:06.884 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:06.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:06.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:23:06.884 00:23:06.884 --- 10.0.0.2 ping statistics --- 00:23:06.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.884 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:23:06.884 00:23:06.884 --- 10.0.0.1 ping statistics --- 00:23:06.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.884 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2310424 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2310424 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2310424 ']' 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:06.884 [2024-07-26 01:58:48.599927] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:23:06.884 [2024-07-26 01:58:48.600018] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.884 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.884 [2024-07-26 01:58:48.670513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.884 [2024-07-26 01:58:48.763576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.884 [2024-07-26 01:58:48.763642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.884 [2024-07-26 01:58:48.763659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.884 [2024-07-26 01:58:48.763672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.884 [2024-07-26 01:58:48.763684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:06.884 [2024-07-26 01:58:48.763717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:06.884 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.142 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.142 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:07.142 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:07.142 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.142 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:07.142 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.142 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.142 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.142 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:07.422 [2024-07-26 01:58:49.159195] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.422 [2024-07-26 01:58:49.175168] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.422 [2024-07-26 01:58:49.175418] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.422 [2024-07-26 01:58:49.206660] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:07.422 malloc0 00:23:07.422 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.422 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2310573 00:23:07.422 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.422 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2310573 /var/tmp/bdevperf.sock 00:23:07.422 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2310573 ']' 00:23:07.422 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.422 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.422 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:07.422 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.422 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.422 [2024-07-26 01:58:49.294463] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:23:07.422 [2024-07-26 01:58:49.294533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310573 ] 00:23:07.422 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.422 [2024-07-26 01:58:49.351495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.680 [2024-07-26 01:58:49.436442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.680 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.680 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:07.680 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.937 [2024-07-26 01:58:49.765520] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.937 [2024-07-26 01:58:49.765643] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:07.937 TLSTESTn1 00:23:07.937 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.195 Running I/O for 10 seconds... 00:23:18.158 00:23:18.158 Latency(us) 00:23:18.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.158 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.158 Verification LBA range: start 0x0 length 0x2000 00:23:18.158 TLSTESTn1 : 10.04 3504.28 13.69 0.00 0.00 36441.89 6165.24 43108.12 00:23:18.158 =================================================================================================================== 00:23:18.158 Total : 3504.28 13.69 0.00 0.00 36441.89 6165.24 43108.12 00:23:18.158 0 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:18.158 nvmf_trace.0 
00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2310573 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2310573 ']' 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2310573 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2310573 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2310573' 00:23:18.158 killing process with pid 2310573 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2310573 00:23:18.158 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.158 00:23:18.158 Latency(us) 00:23:18.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.158 =================================================================================================================== 00:23:18.158 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.158 [2024-07-26 01:59:00.151219] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:18.158 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
2310573 00:23:18.416 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:18.416 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:18.416 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:18.416 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:18.416 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:18.416 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:18.416 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:18.416 rmmod nvme_tcp 00:23:18.416 rmmod nvme_fabrics 00:23:18.416 rmmod nvme_keyring 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2310424 ']' 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2310424 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2310424 ']' 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2310424 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2310424 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2310424' 00:23:18.674 killing process with pid 2310424 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2310424 00:23:18.674 [2024-07-26 01:59:00.467449] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:18.674 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2310424 00:23:18.932 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:18.932 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:18.932 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:18.932 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:18.932 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:18.932 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.932 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.932 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.835 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:20.835 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:20.835 00:23:20.835 real 0m16.654s 00:23:20.835 user 0m21.497s 00:23:20.835 sys 
0m5.419s 00:23:20.835 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:20.835 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:20.835 ************************************ 00:23:20.835 END TEST nvmf_fips 00:23:20.835 ************************************ 00:23:20.835 01:59:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:23:20.835 01:59:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:20.835 01:59:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:20.835 01:59:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:20.835 01:59:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:20.835 ************************************ 00:23:20.835 START TEST nvmf_fuzz 00:23:20.835 ************************************ 00:23:20.835 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:21.094 * Looking for test storage... 
00:23:21.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:21.094 
01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:21.094 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:22.994 01:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.994 
01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:22.994 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:22.994 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.994 01:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:22.994 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.994 
01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:22.994 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:22.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:22.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:23:22.994 00:23:22.994 --- 10.0.0.2 ping statistics --- 00:23:22.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.994 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:23:22.994 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:23:22.994 00:23:22.994 --- 10.0.0.1 ping statistics --- 00:23:22.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.994 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2313711 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2313711 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2313711 ']' 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
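The trace above (nvmf/common.sh@229-268) moves one port of the NIC into a network namespace so the target and initiator can talk over real TCP on a single host. A dry-run sketch of that sequence, reconstructed from this log: the `run` helper only prints each command so the sketch is safe without root, and the interface names, namespace name, and 10.0.0.x addresses are taken from the log itself.

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the netns plumbing traced above; "run" echoes
# instead of executing, so no root privileges are needed.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0       # moved into the namespace; hosts the NVMe-oF target
INITIATOR_IF=cvl_0_1    # stays in the root namespace; hosts the initiator
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # root ns -> namespaced target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # namespace -> root ns
```

The two pings at the end mirror the connectivity check in the log; only once both succeed does the harness launch `nvmf_tgt` inside the namespace.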
00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.995 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:23.561 Malloc0 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:23.561 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:55.622 Fuzzing completed. 
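Before the fuzzer run above, fabrics_fuzz.sh@19-27 configures the target over RPC: create the TCP transport, back a subsystem with a malloc bdev, and expose it on a listener. A sketch of that sequence with the arguments seen in this log; the `rpc` helper here is a hypothetical stand-in that echoes instead of invoking SPDK's `rpc.py` against the live target.

```shell
#!/usr/bin/env bash
# Echo-only stand-in for SPDK's rpc.py; arguments match the traced commands.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB I/O unit
rpc bdev_malloc_create -b Malloc0 64 512             # 64 MiB bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The resulting transport ID string (`trtype:tcp adrfam:IPv4 subnqn:... traddr:10.0.0.2 trsvcid:4420`) is exactly what the log then passes to `nvme_fuzz` via `-F`.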
Shutting down the fuzz application 00:23:55.622 00:23:55.622 Dumping successful admin opcodes: 00:23:55.622 8, 9, 10, 24, 00:23:55.622 Dumping successful io opcodes: 00:23:55.622 0, 9, 00:23:55.622 NS: 0x200003aeff00 I/O qp, Total commands completed: 454671, total successful commands: 2638, random_seed: 1722220544 00:23:55.622 NS: 0x200003aeff00 admin qp, Total commands completed: 55344, total successful commands: 442, random_seed: 1317898176 00:23:55.622 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:55.622 Fuzzing completed. Shutting down the fuzz application 00:23:55.622 00:23:55.622 Dumping successful admin opcodes: 00:23:55.622 24, 00:23:55.622 Dumping successful io opcodes: 00:23:55.622 00:23:55.622 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1515630591 00:23:55.622 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1515757663 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:55.622 01:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:55.622 rmmod nvme_tcp 00:23:55.622 rmmod nvme_fabrics 00:23:55.622 rmmod nvme_keyring 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2313711 ']' 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2313711 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2313711 ']' 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 2313711 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2313711 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2313711' 00:23:55.622 killing process with pid 2313711 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 2313711 00:23:55.622 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 2313711 00:23:55.881 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:55.881 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:55.881 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:55.881 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:55.881 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:55.881 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.881 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.881 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:58.414 00:23:58.414 real 0m37.032s 00:23:58.414 user 0m50.719s 00:23:58.414 sys 0m15.605s 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # 
set +x 00:23:58.414 ************************************ 00:23:58.414 END TEST nvmf_fuzz 00:23:58.414 ************************************ 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:58.414 ************************************ 00:23:58.414 START TEST nvmf_multiconnection 00:23:58.414 ************************************ 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:58.414 * Looking for test storage... 
00:23:58.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.414 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.319 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.319 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 
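The `gather_supported_nvmf_pci_devs` phase traced in this section (nvmf/common.sh@382-400) resolves each NIC's PCI address to its kernel interface name by globbing `net/` under the device's sysfs node and stripping the path prefix. A minimal sketch of that loop, using a throwaway fake sysfs tree in place of `/sys` so it runs unprivileged; the PCI addresses and `cvl_0_*` names are the ones from this log.

```shell
#!/usr/bin/env bash
# Sketch of the sysfs PCI -> net-device discovery loop; a temp dir mimics
# /sys/bus/pci/devices so the glob works outside the test rig.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

pci_devs=("0000:0a:00.0" "0000:0a:00.1")
net_devs=()
for pci in "${pci_devs[@]}"; do
  pci_net_devs=("$sysfs/$pci/net/"*)        # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the leaf interface name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

This is why the log prints one `Found net devices under 0000:0a:00.x: cvl_0_x` line per port before the `(( 2 == 0 ))` emptiness check.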
00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:00.320 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:00.320 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:00.320 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:24:00.320 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.320 01:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:00.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:24:00.320 00:24:00.320 --- 10.0.0.2 ping statistics --- 00:24:00.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.320 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:24:00.320 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:24:00.320 00:24:00.320 --- 10.0.0.1 ping statistics --- 00:24:00.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.320 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:24:00.320 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.320 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:00.320 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:00.321 01:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2319422 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2319422 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 2319422 ']' 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:00.321 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.321 [2024-07-26 01:59:42.071329] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:24:00.321 [2024-07-26 01:59:42.071413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.321 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.321 [2024-07-26 01:59:42.142865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.321 [2024-07-26 01:59:42.236691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.321 [2024-07-26 01:59:42.236751] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.321 [2024-07-26 01:59:42.236778] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.321 [2024-07-26 01:59:42.236792] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.321 [2024-07-26 01:59:42.236803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.321 [2024-07-26 01:59:42.238086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.321 [2024-07-26 01:59:42.238130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.321 [2024-07-26 01:59:42.238216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.321 [2024-07-26 01:59:42.238219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.580 [2024-07-26 01:59:42.378393] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:00.580 01:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.580 Malloc1 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.580 [2024-07-26 01:59:42.433484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.580 Malloc2 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.580 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.581 Malloc3 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.581 Malloc4 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.581 
01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.581 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.841 Malloc5 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.841 01:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.841 Malloc6 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.841 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 Malloc7 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 Malloc8 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 Malloc9 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.842 Malloc10 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.842 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.102 Malloc11 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:01.102 
01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:01.102 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.103 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.103 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.103 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:01.103 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.103 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.103 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.103 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:01.103 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.103 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
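The trace above repeats the same four-step RPC sequence for each of the 11 subsystems: create a 64 MiB / 512-byte-block malloc bdev, create the subsystem with a matching serial, attach the bdev as a namespace, and add a TCP listener. A minimal dry-run sketch of that loop follows; the `rpc` wrapper here just echoes the commands, and swapping it for SPDK's real `scripts/rpc.py` (against a live nvmf target) is an assumption left to the reader — the subsystem count, IP, and port are taken from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-subsystem setup loop traced above.
# Replace the echo wrapper with SPDK's scripts/rpc.py to apply for real.
rpc() { echo "rpc.py $*"; }

setup_subsystems() {
    local n=$1 ip=$2 port=$3 i
    for i in $(seq 1 "$n"); do
        # 64 MiB malloc bdev with 512-byte blocks, named MallocN
        rpc bdev_malloc_create 64 512 -b "Malloc$i"
        # Subsystem cnodeN, allow any host (-a), serial SPDKN
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        # Expose the bdev as a namespace of that subsystem
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        # Listen on TCP at the target address/port from the log
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a "$ip" -s "$port"
    done
}

setup_subsystems 11 10.0.0.2 4420
```

Each subsystem gets its own bdev and serial, which is what lets the host side later match devices by `SPDK$i` in `lsblk` output.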
00:24:01.672 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:01.672 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:01.672 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:01.672 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:01.672 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:03.577 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:03.577 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:03.577 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:03.577 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:03.577 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:03.577 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:03.577 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:03.577 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:04.513 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:04.513 01:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:04.513 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:04.513 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:04.513 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:06.482 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:06.482 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:06.482 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:06.483 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:06.483 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:06.483 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:06.483 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.483 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:07.054 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:07.054 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:07.054 01:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:07.054 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:07.054 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:09.584 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:09.584 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:09.584 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:24:09.584 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:09.584 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:09.584 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:09.584 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.584 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:09.842 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:09.842 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:09.842 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:09.842 
01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:09.842 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:11.744 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:12.002 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:12.002 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:24:12.002 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:12.002 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:12.002 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:12.002 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:12.002 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:12.934 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:12.934 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:12.934 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:12.934 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:12.935 01:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:14.838 01:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:14.838 01:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:14.838 01:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:24:14.838 01:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:14.838 01:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:14.838 01:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:14.838 01:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:14.838 01:59:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:15.778 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:15.778 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:15.778 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:15.778 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:15.778 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:17.681 01:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:17.681 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:17.681 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:24:17.681 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:17.681 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:17.681 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:17.681 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.681 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:18.615 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:18.615 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:18.615 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:18.615 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:18.615 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:20.514 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:20.514 02:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:20.514 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:20.514 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:20.514 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:20.514 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:20.514 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.514 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:21.080 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:21.080 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:21.080 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.080 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:21.080 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:23.611 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:23.611 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:23.611 02:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:24:23.611 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:23.611 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.611 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:23.611 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.611 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:23.868 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:23.868 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:23.868 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:23.868 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:23.868 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:26.395 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:26.395 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:26.395 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:24:26.395 02:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:26.395 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:26.395 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:26.395 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.395 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:26.967 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:26.967 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:26.967 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:26.967 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:26.967 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:28.898 02:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:28.898 02:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:28.898 02:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:28.898 02:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:28.898 02:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:28.898 02:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:28.898 02:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:28.898 02:00:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:29.829 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:29.829 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:29.829 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.829 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:29.829 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:31.749 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:31.749 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:31.749 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:31.749 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:31.749 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:31.749 
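The connect loop traced above pairs each `nvme connect` with a `waitforserial` helper that sleeps, then polls `lsblk -l -o NAME,SERIAL | grep -c SPDK$i` up to 16 times until the expected device count appears. A sketch of that polling pattern is below; the injectable `check` argument is an illustration-only addition so the logic can run without real NVMe devices — the actual helper in `autotest_common.sh` shells out to `lsblk` directly and sleeps 2 seconds between probes.

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial pattern from the trace. In the real flow,
# each iteration is preceded by:
#   nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID \
#       -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
waitforserial() {
    local serial=$1 check=$2 i=0
    # Up to 16 probes, mirroring the (( i++ <= 15 )) bound in the log
    while (( i++ <= 15 )); do
        # check prints how many block devices carry this serial;
        # the real probe is: lsblk -l -o NAME,SERIAL | grep -c "$serial"
        if [ "$("$check" "$serial")" -ge 1 ]; then
            return 0
        fi
        sleep 0.1   # the real helper sleeps 2s between probes
    done
    echo "no device with serial $serial appeared" >&2
    return 1
}
```

Polling by serial rather than by device node sidesteps the fact that kernel device names (`/dev/nvme0n1`, `/dev/nvme10n1`, ...) are assigned in enumeration order, not subsystem order — which is also why the fio job section below maps jobs to device nodes rather than serials.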
02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:31.749 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:31.749 [global] 00:24:31.749 thread=1 00:24:31.749 invalidate=1 00:24:31.749 rw=read 00:24:31.749 time_based=1 00:24:31.749 runtime=10 00:24:31.749 ioengine=libaio 00:24:31.749 direct=1 00:24:31.749 bs=262144 00:24:31.749 iodepth=64 00:24:31.749 norandommap=1 00:24:31.749 numjobs=1 00:24:31.749 00:24:31.749 [job0] 00:24:31.749 filename=/dev/nvme0n1 00:24:31.749 [job1] 00:24:31.749 filename=/dev/nvme10n1 00:24:31.749 [job2] 00:24:31.749 filename=/dev/nvme1n1 00:24:31.749 [job3] 00:24:31.749 filename=/dev/nvme2n1 00:24:31.749 [job4] 00:24:31.749 filename=/dev/nvme3n1 00:24:31.749 [job5] 00:24:31.749 filename=/dev/nvme4n1 00:24:31.749 [job6] 00:24:31.749 filename=/dev/nvme5n1 00:24:31.749 [job7] 00:24:31.749 filename=/dev/nvme6n1 00:24:31.749 [job8] 00:24:31.749 filename=/dev/nvme7n1 00:24:31.749 [job9] 00:24:31.749 filename=/dev/nvme8n1 00:24:31.749 [job10] 00:24:31.749 filename=/dev/nvme9n1 00:24:31.749 Could not set queue depth (nvme0n1) 00:24:31.749 Could not set queue depth (nvme10n1) 00:24:31.749 Could not set queue depth (nvme1n1) 00:24:31.749 Could not set queue depth (nvme2n1) 00:24:31.749 Could not set queue depth (nvme3n1) 00:24:31.749 Could not set queue depth (nvme4n1) 00:24:31.749 Could not set queue depth (nvme5n1) 00:24:31.749 Could not set queue depth (nvme6n1) 00:24:31.749 Could not set queue depth (nvme7n1) 00:24:31.749 Could not set queue depth (nvme8n1) 00:24:31.749 Could not set queue depth (nvme9n1) 00:24:32.006 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.006 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:24:32.006 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.006 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.006 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.006 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.006 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.006 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.006 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.006 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.006 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.006 fio-3.35 00:24:32.006 Starting 11 threads 00:24:44.200 00:24:44.201 job0: (groupid=0, jobs=1): err= 0: pid=2324300: Fri Jul 26 02:00:24 2024 00:24:44.201 read: IOPS=603, BW=151MiB/s (158MB/s)(1510MiB/10014msec) 00:24:44.201 slat (usec): min=9, max=99813, avg=1516.33, stdev=5118.35 00:24:44.201 clat (usec): min=1005, max=284536, avg=104533.57, stdev=51060.58 00:24:44.201 lat (usec): min=1025, max=321119, avg=106049.90, stdev=51912.22 00:24:44.201 clat percentiles (msec): 00:24:44.201 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 63], 00:24:44.201 | 30.00th=[ 72], 40.00th=[ 86], 50.00th=[ 101], 60.00th=[ 112], 00:24:44.201 | 70.00th=[ 124], 80.00th=[ 140], 90.00th=[ 178], 95.00th=[ 203], 00:24:44.201 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 279], 99.95th=[ 284], 00:24:44.201 | 99.99th=[ 284] 00:24:44.201 bw ( KiB/s): min=64512, 
max=280576, per=7.94%, avg=152940.65, stdev=57966.47, samples=20 00:24:44.201 iops : min= 252, max= 1096, avg=597.30, stdev=226.44, samples=20 00:24:44.201 lat (msec) : 2=0.08%, 4=0.35%, 10=1.03%, 20=1.31%, 50=8.51% 00:24:44.201 lat (msec) : 100=38.80%, 250=48.72%, 500=1.21% 00:24:44.201 cpu : usr=0.40%, sys=2.10%, ctx=1168, majf=0, minf=4097 00:24:44.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:44.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.201 issued rwts: total=6039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.201 job1: (groupid=0, jobs=1): err= 0: pid=2324301: Fri Jul 26 02:00:24 2024 00:24:44.201 read: IOPS=759, BW=190MiB/s (199MB/s)(1920MiB/10113msec) 00:24:44.201 slat (usec): min=8, max=147924, avg=674.70, stdev=4881.07 00:24:44.201 clat (usec): min=1036, max=273750, avg=83547.96, stdev=60511.97 00:24:44.201 lat (usec): min=1087, max=393634, avg=84222.65, stdev=61139.35 00:24:44.201 clat percentiles (msec): 00:24:44.201 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 28], 00:24:44.201 | 30.00th=[ 47], 40.00th=[ 59], 50.00th=[ 71], 60.00th=[ 91], 00:24:44.201 | 70.00th=[ 109], 80.00th=[ 138], 90.00th=[ 171], 95.00th=[ 194], 00:24:44.201 | 99.00th=[ 251], 99.50th=[ 259], 99.90th=[ 266], 99.95th=[ 271], 00:24:44.201 | 99.99th=[ 275] 00:24:44.201 bw ( KiB/s): min=103424, max=331264, per=10.12%, avg=194892.40, stdev=67442.01, samples=20 00:24:44.201 iops : min= 404, max= 1294, avg=761.20, stdev=263.51, samples=20 00:24:44.201 lat (msec) : 2=0.66%, 4=1.86%, 10=8.70%, 20=4.27%, 50=17.24% 00:24:44.201 lat (msec) : 100=33.19%, 250=32.89%, 500=1.19% 00:24:44.201 cpu : usr=0.32%, sys=1.93%, ctx=1851, majf=0, minf=4097 00:24:44.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:44.201 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.201 issued rwts: total=7678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.201 job2: (groupid=0, jobs=1): err= 0: pid=2324302: Fri Jul 26 02:00:24 2024 00:24:44.201 read: IOPS=782, BW=196MiB/s (205MB/s)(1968MiB/10062msec) 00:24:44.201 slat (usec): min=9, max=61998, avg=1089.22, stdev=3409.61 00:24:44.201 clat (msec): min=2, max=235, avg=80.68, stdev=35.70 00:24:44.201 lat (msec): min=2, max=235, avg=81.77, stdev=36.10 00:24:44.201 clat percentiles (msec): 00:24:44.201 | 1.00th=[ 19], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 43], 00:24:44.201 | 30.00th=[ 59], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 94], 00:24:44.201 | 70.00th=[ 106], 80.00th=[ 114], 90.00th=[ 125], 95.00th=[ 136], 00:24:44.201 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 186], 99.95th=[ 190], 00:24:44.201 | 99.99th=[ 236] 00:24:44.201 bw ( KiB/s): min=118272, max=482304, per=10.38%, avg=199804.75, stdev=86544.94, samples=20 00:24:44.201 iops : min= 462, max= 1884, avg=780.40, stdev=338.12, samples=20 00:24:44.201 lat (msec) : 4=0.20%, 10=0.25%, 20=0.70%, 50=24.84%, 100=38.53% 00:24:44.201 lat (msec) : 250=35.48% 00:24:44.201 cpu : usr=0.47%, sys=2.35%, ctx=1568, majf=0, minf=4097 00:24:44.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:44.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.201 issued rwts: total=7870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.201 job3: (groupid=0, jobs=1): err= 0: pid=2324303: Fri Jul 26 02:00:24 2024 00:24:44.201 read: IOPS=515, BW=129MiB/s (135MB/s)(1303MiB/10114msec) 00:24:44.201 slat (usec): min=13, max=82732, avg=1758.64, 
stdev=5747.00 00:24:44.201 clat (msec): min=7, max=298, avg=122.35, stdev=53.00 00:24:44.201 lat (msec): min=7, max=338, avg=124.11, stdev=53.82 00:24:44.201 clat percentiles (msec): 00:24:44.201 | 1.00th=[ 13], 5.00th=[ 36], 10.00th=[ 55], 20.00th=[ 84], 00:24:44.201 | 30.00th=[ 96], 40.00th=[ 105], 50.00th=[ 117], 60.00th=[ 130], 00:24:44.201 | 70.00th=[ 144], 80.00th=[ 163], 90.00th=[ 197], 95.00th=[ 220], 00:24:44.201 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 288], 00:24:44.201 | 99.99th=[ 300] 00:24:44.201 bw ( KiB/s): min=56832, max=264192, per=6.84%, avg=131753.75, stdev=52924.08, samples=20 00:24:44.201 iops : min= 222, max= 1032, avg=514.50, stdev=206.70, samples=20 00:24:44.201 lat (msec) : 10=0.31%, 20=1.42%, 50=7.04%, 100=26.44%, 250=62.29% 00:24:44.201 lat (msec) : 500=2.49% 00:24:44.201 cpu : usr=0.37%, sys=1.88%, ctx=1039, majf=0, minf=4097 00:24:44.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:44.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.201 issued rwts: total=5211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.201 job4: (groupid=0, jobs=1): err= 0: pid=2324304: Fri Jul 26 02:00:24 2024 00:24:44.201 read: IOPS=810, BW=203MiB/s (212MB/s)(2039MiB/10060msec) 00:24:44.201 slat (usec): min=9, max=74869, avg=982.04, stdev=3870.42 00:24:44.201 clat (msec): min=2, max=258, avg=77.92, stdev=47.79 00:24:44.201 lat (msec): min=2, max=296, avg=78.90, stdev=48.37 00:24:44.201 clat percentiles (msec): 00:24:44.201 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 31], 00:24:44.201 | 30.00th=[ 40], 40.00th=[ 58], 50.00th=[ 72], 60.00th=[ 88], 00:24:44.201 | 70.00th=[ 103], 80.00th=[ 116], 90.00th=[ 140], 95.00th=[ 169], 00:24:44.201 | 99.00th=[ 215], 99.50th=[ 222], 99.90th=[ 247], 99.95th=[ 251], 00:24:44.201 
| 99.99th=[ 259] 00:24:44.201 bw ( KiB/s): min=102912, max=481341, per=10.75%, avg=207074.50, stdev=113934.96, samples=20 00:24:44.201 iops : min= 402, max= 1880, avg=808.75, stdev=445.06, samples=20 00:24:44.201 lat (msec) : 4=0.16%, 10=1.48%, 20=3.78%, 50=29.73%, 100=33.42% 00:24:44.201 lat (msec) : 250=31.33%, 500=0.10% 00:24:44.201 cpu : usr=0.36%, sys=2.33%, ctx=1629, majf=0, minf=4097 00:24:44.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:44.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.201 issued rwts: total=8154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.201 job5: (groupid=0, jobs=1): err= 0: pid=2324305: Fri Jul 26 02:00:24 2024 00:24:44.201 read: IOPS=634, BW=159MiB/s (166MB/s)(1606MiB/10116msec) 00:24:44.201 slat (usec): min=9, max=176148, avg=935.05, stdev=4347.78 00:24:44.201 clat (usec): min=767, max=273368, avg=99797.48, stdev=43976.14 00:24:44.201 lat (usec): min=789, max=411404, avg=100732.53, stdev=44472.00 00:24:44.201 clat percentiles (msec): 00:24:44.201 | 1.00th=[ 6], 5.00th=[ 29], 10.00th=[ 54], 20.00th=[ 65], 00:24:44.201 | 30.00th=[ 74], 40.00th=[ 85], 50.00th=[ 97], 60.00th=[ 110], 00:24:44.201 | 70.00th=[ 120], 80.00th=[ 132], 90.00th=[ 155], 95.00th=[ 171], 00:24:44.201 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 271], 99.95th=[ 271], 00:24:44.201 | 99.99th=[ 275] 00:24:44.201 bw ( KiB/s): min=95232, max=263168, per=8.45%, avg=162715.65, stdev=42455.40, samples=20 00:24:44.201 iops : min= 372, max= 1028, avg=635.50, stdev=165.84, samples=20 00:24:44.201 lat (usec) : 1000=0.06% 00:24:44.201 lat (msec) : 2=0.08%, 4=0.56%, 10=0.97%, 20=1.59%, 50=5.50% 00:24:44.201 lat (msec) : 100=43.41%, 250=46.53%, 500=1.31% 00:24:44.201 cpu : usr=0.36%, sys=1.71%, ctx=1450, majf=0, minf=4097 00:24:44.201 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:44.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.201 issued rwts: total=6422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.201 job6: (groupid=0, jobs=1): err= 0: pid=2324306: Fri Jul 26 02:00:24 2024 00:24:44.201 read: IOPS=701, BW=175MiB/s (184MB/s)(1774MiB/10115msec) 00:24:44.201 slat (usec): min=9, max=213808, avg=790.29, stdev=4033.68 00:24:44.201 clat (usec): min=1009, max=298372, avg=90366.81, stdev=52937.04 00:24:44.202 lat (usec): min=1080, max=339010, avg=91157.09, stdev=53480.74 00:24:44.202 clat percentiles (msec): 00:24:44.202 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 27], 20.00th=[ 44], 00:24:44.202 | 30.00th=[ 56], 40.00th=[ 74], 50.00th=[ 89], 60.00th=[ 103], 00:24:44.202 | 70.00th=[ 115], 80.00th=[ 129], 90.00th=[ 157], 95.00th=[ 190], 00:24:44.202 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 296], 00:24:44.202 | 99.99th=[ 300] 00:24:44.202 bw ( KiB/s): min=77312, max=375808, per=9.35%, avg=179991.00, stdev=68775.22, samples=20 00:24:44.202 iops : min= 302, max= 1468, avg=703.00, stdev=268.67, samples=20 00:24:44.202 lat (msec) : 2=0.07%, 4=2.07%, 10=3.09%, 20=2.71%, 50=17.53% 00:24:44.202 lat (msec) : 100=32.51%, 250=40.95%, 500=1.07% 00:24:44.202 cpu : usr=0.25%, sys=1.85%, ctx=1698, majf=0, minf=3721 00:24:44.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:44.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.202 issued rwts: total=7096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.202 job7: (groupid=0, jobs=1): err= 0: pid=2324307: Fri Jul 26 02:00:24 
2024 00:24:44.202 read: IOPS=617, BW=154MiB/s (162MB/s)(1553MiB/10064msec) 00:24:44.202 slat (usec): min=9, max=97436, avg=983.40, stdev=3849.55 00:24:44.202 clat (msec): min=3, max=257, avg=102.63, stdev=42.20 00:24:44.202 lat (msec): min=3, max=351, avg=103.61, stdev=42.64 00:24:44.202 clat percentiles (msec): 00:24:44.202 | 1.00th=[ 18], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 69], 00:24:44.202 | 30.00th=[ 85], 40.00th=[ 95], 50.00th=[ 104], 60.00th=[ 110], 00:24:44.202 | 70.00th=[ 117], 80.00th=[ 127], 90.00th=[ 153], 95.00th=[ 184], 00:24:44.202 | 99.00th=[ 236], 99.50th=[ 249], 99.90th=[ 255], 99.95th=[ 257], 00:24:44.202 | 99.99th=[ 257] 00:24:44.202 bw ( KiB/s): min=106496, max=224319, per=8.17%, avg=157354.65, stdev=34579.67, samples=20 00:24:44.202 iops : min= 416, max= 876, avg=614.60, stdev=135.03, samples=20 00:24:44.202 lat (msec) : 4=0.02%, 10=0.11%, 20=1.30%, 50=9.21%, 100=35.66% 00:24:44.202 lat (msec) : 250=53.20%, 500=0.50% 00:24:44.202 cpu : usr=0.22%, sys=1.84%, ctx=1337, majf=0, minf=4097 00:24:44.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:44.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.202 issued rwts: total=6212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.202 job8: (groupid=0, jobs=1): err= 0: pid=2324308: Fri Jul 26 02:00:24 2024 00:24:44.202 read: IOPS=581, BW=145MiB/s (152MB/s)(1470MiB/10115msec) 00:24:44.202 slat (usec): min=9, max=87625, avg=1291.80, stdev=4579.36 00:24:44.202 clat (msec): min=3, max=304, avg=108.71, stdev=48.94 00:24:44.202 lat (msec): min=3, max=334, avg=110.01, stdev=49.52 00:24:44.202 clat percentiles (msec): 00:24:44.202 | 1.00th=[ 10], 5.00th=[ 41], 10.00th=[ 59], 20.00th=[ 72], 00:24:44.202 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 99], 60.00th=[ 116], 00:24:44.202 | 70.00th=[ 
132], 80.00th=[ 146], 90.00th=[ 167], 95.00th=[ 201], 00:24:44.202 | 99.00th=[ 262], 99.50th=[ 279], 99.90th=[ 300], 99.95th=[ 305], 00:24:44.202 | 99.99th=[ 305] 00:24:44.202 bw ( KiB/s): min=53760, max=239137, per=7.73%, avg=148831.80, stdev=47356.71, samples=20 00:24:44.202 iops : min= 210, max= 934, avg=581.30, stdev=184.95, samples=20 00:24:44.202 lat (msec) : 4=0.02%, 10=1.04%, 20=2.14%, 50=2.89%, 100=45.15% 00:24:44.202 lat (msec) : 250=47.09%, 500=1.67% 00:24:44.202 cpu : usr=0.26%, sys=1.82%, ctx=1300, majf=0, minf=4097 00:24:44.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:44.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.202 issued rwts: total=5880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.202 job9: (groupid=0, jobs=1): err= 0: pid=2324309: Fri Jul 26 02:00:24 2024 00:24:44.202 read: IOPS=726, BW=182MiB/s (190MB/s)(1828MiB/10062msec) 00:24:44.202 slat (usec): min=10, max=156898, avg=1112.79, stdev=4321.04 00:24:44.202 clat (msec): min=2, max=334, avg=86.89, stdev=49.25 00:24:44.202 lat (msec): min=2, max=334, avg=88.00, stdev=49.83 00:24:44.202 clat percentiles (msec): 00:24:44.202 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 36], 00:24:44.202 | 30.00th=[ 57], 40.00th=[ 73], 50.00th=[ 87], 60.00th=[ 99], 00:24:44.202 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 138], 95.00th=[ 174], 00:24:44.202 | 99.00th=[ 264], 99.50th=[ 300], 99.90th=[ 305], 99.95th=[ 305], 00:24:44.202 | 99.99th=[ 334] 00:24:44.202 bw ( KiB/s): min=95232, max=473088, per=9.64%, avg=185524.00, stdev=85943.80, samples=20 00:24:44.202 iops : min= 372, max= 1848, avg=724.65, stdev=335.73, samples=20 00:24:44.202 lat (msec) : 4=0.08%, 10=0.74%, 20=2.19%, 50=24.00%, 100=34.56% 00:24:44.202 lat (msec) : 250=36.90%, 500=1.53% 00:24:44.202 cpu : 
usr=0.46%, sys=2.45%, ctx=1491, majf=0, minf=4097 00:24:44.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:44.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.202 issued rwts: total=7312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.202 job10: (groupid=0, jobs=1): err= 0: pid=2324310: Fri Jul 26 02:00:24 2024 00:24:44.202 read: IOPS=811, BW=203MiB/s (213MB/s)(2052MiB/10116msec) 00:24:44.202 slat (usec): min=9, max=81898, avg=1024.80, stdev=3595.82 00:24:44.202 clat (msec): min=2, max=302, avg=77.78, stdev=41.30 00:24:44.202 lat (msec): min=2, max=302, avg=78.80, stdev=41.75 00:24:44.202 clat percentiles (msec): 00:24:44.202 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 37], 20.00th=[ 44], 00:24:44.202 | 30.00th=[ 53], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 80], 00:24:44.202 | 70.00th=[ 89], 80.00th=[ 108], 90.00th=[ 140], 95.00th=[ 161], 00:24:44.202 | 99.00th=[ 197], 99.50th=[ 209], 99.90th=[ 264], 99.95th=[ 275], 00:24:44.202 | 99.99th=[ 305] 00:24:44.202 bw ( KiB/s): min=99840, max=394475, per=10.83%, avg=208480.60, stdev=74090.86, samples=20 00:24:44.202 iops : min= 390, max= 1540, avg=814.25, stdev=289.34, samples=20 00:24:44.202 lat (msec) : 4=0.10%, 10=1.69%, 20=2.79%, 50=23.19%, 100=49.41% 00:24:44.202 lat (msec) : 250=22.69%, 500=0.12% 00:24:44.202 cpu : usr=0.42%, sys=2.67%, ctx=1595, majf=0, minf=4097 00:24:44.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:44.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.202 issued rwts: total=8209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.202 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.202 00:24:44.202 Run status 
group 0 (all jobs): 00:24:44.202 READ: bw=1880MiB/s (1972MB/s), 129MiB/s-203MiB/s (135MB/s-213MB/s), io=18.6GiB (19.9GB), run=10014-10116msec 00:24:44.202 00:24:44.202 Disk stats (read/write): 00:24:44.202 nvme0n1: ios=11760/0, merge=0/0, ticks=1239624/0, in_queue=1239624, util=97.28% 00:24:44.202 nvme10n1: ios=15194/0, merge=0/0, ticks=1243655/0, in_queue=1243655, util=97.48% 00:24:44.202 nvme1n1: ios=15540/0, merge=0/0, ticks=1238354/0, in_queue=1238354, util=97.75% 00:24:44.202 nvme2n1: ios=10242/0, merge=0/0, ticks=1229096/0, in_queue=1229096, util=97.87% 00:24:44.202 nvme3n1: ios=16136/0, merge=0/0, ticks=1240010/0, in_queue=1240010, util=97.95% 00:24:44.202 nvme4n1: ios=12683/0, merge=0/0, ticks=1241201/0, in_queue=1241201, util=98.27% 00:24:44.202 nvme5n1: ios=14042/0, merge=0/0, ticks=1242425/0, in_queue=1242425, util=98.44% 00:24:44.202 nvme6n1: ios=12243/0, merge=0/0, ticks=1244408/0, in_queue=1244408, util=98.54% 00:24:44.202 nvme7n1: ios=11584/0, merge=0/0, ticks=1235155/0, in_queue=1235155, util=98.91% 00:24:44.202 nvme8n1: ios=14461/0, merge=0/0, ticks=1238771/0, in_queue=1238771, util=99.08% 00:24:44.202 nvme9n1: ios=16249/0, merge=0/0, ticks=1234729/0, in_queue=1234729, util=99.20% 00:24:44.202 02:00:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:44.202 [global] 00:24:44.202 thread=1 00:24:44.202 invalidate=1 00:24:44.202 rw=randwrite 00:24:44.202 time_based=1 00:24:44.202 runtime=10 00:24:44.202 ioengine=libaio 00:24:44.202 direct=1 00:24:44.202 bs=262144 00:24:44.202 iodepth=64 00:24:44.202 norandommap=1 00:24:44.202 numjobs=1 00:24:44.202 00:24:44.202 [job0] 00:24:44.202 filename=/dev/nvme0n1 00:24:44.202 [job1] 00:24:44.202 filename=/dev/nvme10n1 00:24:44.202 [job2] 00:24:44.202 filename=/dev/nvme1n1 00:24:44.202 [job3] 00:24:44.202 filename=/dev/nvme2n1 00:24:44.202 [job4] 00:24:44.202 
filename=/dev/nvme3n1 00:24:44.202 [job5] 00:24:44.202 filename=/dev/nvme4n1 00:24:44.202 [job6] 00:24:44.202 filename=/dev/nvme5n1 00:24:44.202 [job7] 00:24:44.202 filename=/dev/nvme6n1 00:24:44.202 [job8] 00:24:44.202 filename=/dev/nvme7n1 00:24:44.202 [job9] 00:24:44.202 filename=/dev/nvme8n1 00:24:44.202 [job10] 00:24:44.202 filename=/dev/nvme9n1 00:24:44.202 Could not set queue depth (nvme0n1) 00:24:44.202 Could not set queue depth (nvme10n1) 00:24:44.202 Could not set queue depth (nvme1n1) 00:24:44.202 Could not set queue depth (nvme2n1) 00:24:44.202 Could not set queue depth (nvme3n1) 00:24:44.202 Could not set queue depth (nvme4n1) 00:24:44.202 Could not set queue depth (nvme5n1) 00:24:44.202 Could not set queue depth (nvme6n1) 00:24:44.202 Could not set queue depth (nvme7n1) 00:24:44.202 Could not set queue depth (nvme8n1) 00:24:44.202 Could not set queue depth (nvme9n1) 00:24:44.203 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.203 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.203 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.203 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.203 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.203 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.203 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.203 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.203 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, 
(T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.203 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.203 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.203 fio-3.35 00:24:44.203 Starting 11 threads 00:24:54.181 00:24:54.181 job0: (groupid=0, jobs=1): err= 0: pid=2325475: Fri Jul 26 02:00:35 2024 00:24:54.181 write: IOPS=469, BW=117MiB/s (123MB/s)(1200MiB/10217msec); 0 zone resets 00:24:54.181 slat (usec): min=21, max=179572, avg=1741.94, stdev=5215.37 00:24:54.181 clat (usec): min=1547, max=428114, avg=134374.56, stdev=89341.59 00:24:54.181 lat (msec): min=2, max=428, avg=136.12, stdev=90.51 00:24:54.181 clat percentiles (msec): 00:24:54.181 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 34], 20.00th=[ 42], 00:24:54.181 | 30.00th=[ 44], 40.00th=[ 85], 50.00th=[ 138], 60.00th=[ 163], 00:24:54.181 | 70.00th=[ 207], 80.00th=[ 230], 90.00th=[ 249], 95.00th=[ 268], 00:24:54.181 | 99.00th=[ 330], 99.50th=[ 359], 99.90th=[ 414], 99.95th=[ 414], 00:24:54.181 | 99.99th=[ 430] 00:24:54.181 bw ( KiB/s): min=63488, max=374784, per=9.18%, avg=121280.10, stdev=88863.72, samples=20 00:24:54.181 iops : min= 248, max= 1464, avg=473.70, stdev=347.11, samples=20 00:24:54.181 lat (msec) : 2=0.02%, 4=0.12%, 10=1.04%, 20=3.46%, 50=27.72% 00:24:54.181 lat (msec) : 100=10.02%, 250=47.80%, 500=9.81% 00:24:54.181 cpu : usr=1.45%, sys=1.55%, ctx=2205, majf=0, minf=1 00:24:54.181 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:54.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.181 issued rwts: total=0,4801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.181 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.181 job1: (groupid=0, jobs=1): err= 0: pid=2325489: Fri Jul 26 02:00:35 
2024 00:24:54.181 write: IOPS=541, BW=135MiB/s (142MB/s)(1384MiB/10216msec); 0 zone resets 00:24:54.181 slat (usec): min=17, max=40182, avg=1506.66, stdev=3761.65 00:24:54.181 clat (usec): min=1222, max=444937, avg=116461.05, stdev=79653.99 00:24:54.181 lat (usec): min=1291, max=444999, avg=117967.72, stdev=80692.46 00:24:54.181 clat percentiles (msec): 00:24:54.181 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 29], 20.00th=[ 41], 00:24:54.181 | 30.00th=[ 55], 40.00th=[ 74], 50.00th=[ 102], 60.00th=[ 127], 00:24:54.181 | 70.00th=[ 159], 80.00th=[ 211], 90.00th=[ 239], 95.00th=[ 247], 00:24:54.181 | 99.00th=[ 259], 99.50th=[ 326], 99.90th=[ 435], 99.95th=[ 435], 00:24:54.181 | 99.99th=[ 447] 00:24:54.181 bw ( KiB/s): min=65536, max=365056, per=10.60%, avg=140117.25, stdev=91002.71, samples=20 00:24:54.181 iops : min= 256, max= 1426, avg=547.30, stdev=355.47, samples=20 00:24:54.181 lat (msec) : 2=0.25%, 4=1.21%, 10=2.80%, 20=3.50%, 50=20.64% 00:24:54.181 lat (msec) : 100=20.41%, 250=46.97%, 500=4.21% 00:24:54.181 cpu : usr=1.65%, sys=1.69%, ctx=2456, majf=0, minf=1 00:24:54.181 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:54.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.181 issued rwts: total=0,5537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.181 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.181 job2: (groupid=0, jobs=1): err= 0: pid=2325490: Fri Jul 26 02:00:35 2024 00:24:54.181 write: IOPS=387, BW=97.0MiB/s (102MB/s)(991MiB/10215msec); 0 zone resets 00:24:54.181 slat (usec): min=20, max=72529, avg=1984.84, stdev=5417.07 00:24:54.181 clat (usec): min=991, max=460619, avg=162887.96, stdev=95191.53 00:24:54.181 lat (usec): min=1038, max=460658, avg=164872.81, stdev=96478.09 00:24:54.181 clat percentiles (msec): 00:24:54.181 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 14], 20.00th=[ 40], 00:24:54.181 | 
30.00th=[ 110], 40.00th=[ 169], 50.00th=[ 199], 60.00th=[ 213], 00:24:54.181 | 70.00th=[ 228], 80.00th=[ 243], 90.00th=[ 262], 95.00th=[ 275], 00:24:54.181 | 99.00th=[ 321], 99.50th=[ 393], 99.90th=[ 447], 99.95th=[ 460], 00:24:54.181 | 99.99th=[ 460] 00:24:54.181 bw ( KiB/s): min=61440, max=261620, per=7.55%, avg=99829.70, stdev=49453.59, samples=20 00:24:54.181 iops : min= 240, max= 1021, avg=389.90, stdev=193.00, samples=20 00:24:54.181 lat (usec) : 1000=0.03% 00:24:54.181 lat (msec) : 2=0.30%, 4=1.09%, 10=5.12%, 20=8.02%, 50=7.97% 00:24:54.181 lat (msec) : 100=5.70%, 250=54.71%, 500=17.06% 00:24:54.181 cpu : usr=1.21%, sys=1.24%, ctx=2249, majf=0, minf=1 00:24:54.181 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:54.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.181 issued rwts: total=0,3963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.181 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.181 job3: (groupid=0, jobs=1): err= 0: pid=2325491: Fri Jul 26 02:00:35 2024 00:24:54.181 write: IOPS=467, BW=117MiB/s (123MB/s)(1179MiB/10083msec); 0 zone resets 00:24:54.181 slat (usec): min=22, max=58403, avg=1656.74, stdev=4371.78 00:24:54.181 clat (msec): min=3, max=314, avg=135.17, stdev=75.47 00:24:54.181 lat (msec): min=3, max=314, avg=136.83, stdev=76.54 00:24:54.181 clat percentiles (msec): 00:24:54.181 | 1.00th=[ 11], 5.00th=[ 20], 10.00th=[ 35], 20.00th=[ 66], 00:24:54.181 | 30.00th=[ 88], 40.00th=[ 113], 50.00th=[ 126], 60.00th=[ 150], 00:24:54.181 | 70.00th=[ 171], 80.00th=[ 207], 90.00th=[ 259], 95.00th=[ 275], 00:24:54.181 | 99.00th=[ 288], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 305], 00:24:54.181 | 99.99th=[ 317] 00:24:54.181 bw ( KiB/s): min=59392, max=206336, per=9.01%, avg=119070.20, stdev=43638.08, samples=20 00:24:54.181 iops : min= 232, max= 806, avg=465.10, stdev=170.47, 
samples=20 00:24:54.182 lat (msec) : 4=0.04%, 10=0.83%, 20=4.18%, 50=10.20%, 100=19.22% 00:24:54.182 lat (msec) : 250=53.78%, 500=11.75% 00:24:54.182 cpu : usr=1.43%, sys=1.70%, ctx=2477, majf=0, minf=1 00:24:54.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:54.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.182 issued rwts: total=0,4714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.182 job4: (groupid=0, jobs=1): err= 0: pid=2325492: Fri Jul 26 02:00:35 2024 00:24:54.182 write: IOPS=498, BW=125MiB/s (131MB/s)(1258MiB/10083msec); 0 zone resets 00:24:54.182 slat (usec): min=19, max=101213, avg=1188.24, stdev=3744.96 00:24:54.182 clat (usec): min=896, max=327752, avg=127055.91, stdev=71339.90 00:24:54.182 lat (usec): min=962, max=329998, avg=128244.15, stdev=72042.53 00:24:54.182 clat percentiles (msec): 00:24:54.182 | 1.00th=[ 4], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 69], 00:24:54.182 | 30.00th=[ 81], 40.00th=[ 102], 50.00th=[ 122], 60.00th=[ 140], 00:24:54.182 | 70.00th=[ 161], 80.00th=[ 188], 90.00th=[ 228], 95.00th=[ 259], 00:24:54.182 | 99.00th=[ 296], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 321], 00:24:54.182 | 99.99th=[ 330] 00:24:54.182 bw ( KiB/s): min=59392, max=221184, per=9.62%, avg=127135.80, stdev=38298.69, samples=20 00:24:54.182 iops : min= 232, max= 864, avg=496.60, stdev=149.63, samples=20 00:24:54.182 lat (usec) : 1000=0.04% 00:24:54.182 lat (msec) : 2=0.18%, 4=0.89%, 10=2.17%, 20=2.70%, 50=8.63% 00:24:54.182 lat (msec) : 100=24.55%, 250=54.87%, 500=5.96% 00:24:54.182 cpu : usr=1.64%, sys=1.48%, ctx=3123, majf=0, minf=1 00:24:54.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:54.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.182 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.182 issued rwts: total=0,5030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.182 job5: (groupid=0, jobs=1): err= 0: pid=2325493: Fri Jul 26 02:00:35 2024 00:24:54.182 write: IOPS=469, BW=117MiB/s (123MB/s)(1198MiB/10214msec); 0 zone resets 00:24:54.182 slat (usec): min=21, max=160414, avg=1732.29, stdev=5150.88 00:24:54.182 clat (usec): min=970, max=448684, avg=134238.84, stdev=88926.98 00:24:54.182 lat (usec): min=1013, max=448723, avg=135971.14, stdev=90162.89 00:24:54.182 clat percentiles (msec): 00:24:54.182 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 28], 20.00th=[ 48], 00:24:54.182 | 30.00th=[ 55], 40.00th=[ 87], 50.00th=[ 123], 60.00th=[ 157], 00:24:54.182 | 70.00th=[ 209], 80.00th=[ 230], 90.00th=[ 253], 95.00th=[ 271], 00:24:54.182 | 99.00th=[ 300], 99.50th=[ 342], 99.90th=[ 439], 99.95th=[ 439], 00:24:54.182 | 99.99th=[ 447] 00:24:54.182 bw ( KiB/s): min=45056, max=270336, per=9.16%, avg=121002.20, stdev=72951.73, samples=20 00:24:54.182 iops : min= 176, max= 1056, avg=472.60, stdev=284.89, samples=20 00:24:54.182 lat (usec) : 1000=0.02% 00:24:54.182 lat (msec) : 2=0.31%, 4=0.86%, 10=2.88%, 20=3.65%, 50=19.52% 00:24:54.182 lat (msec) : 100=16.64%, 250=45.61%, 500=10.52% 00:24:54.182 cpu : usr=1.20%, sys=1.56%, ctx=2374, majf=0, minf=1 00:24:54.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:54.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.182 issued rwts: total=0,4791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.182 job6: (groupid=0, jobs=1): err= 0: pid=2325494: Fri Jul 26 02:00:35 2024 00:24:54.182 write: IOPS=547, BW=137MiB/s (143MB/s)(1377MiB/10064msec); 0 zone resets 00:24:54.182 slat (usec): 
min=19, max=81330, avg=1571.96, stdev=3972.32 00:24:54.182 clat (msec): min=3, max=322, avg=115.03, stdev=65.64 00:24:54.182 lat (msec): min=3, max=322, avg=116.61, stdev=66.53 00:24:54.182 clat percentiles (msec): 00:24:54.182 | 1.00th=[ 12], 5.00th=[ 32], 10.00th=[ 41], 20.00th=[ 43], 00:24:54.182 | 30.00th=[ 58], 40.00th=[ 92], 50.00th=[ 116], 60.00th=[ 129], 00:24:54.182 | 70.00th=[ 153], 80.00th=[ 174], 90.00th=[ 205], 95.00th=[ 230], 00:24:54.182 | 99.00th=[ 275], 99.50th=[ 296], 99.90th=[ 317], 99.95th=[ 321], 00:24:54.182 | 99.99th=[ 321] 00:24:54.182 bw ( KiB/s): min=57344, max=380928, per=10.55%, avg=139390.70, stdev=78593.68, samples=20 00:24:54.182 iops : min= 224, max= 1488, avg=544.45, stdev=307.05, samples=20 00:24:54.182 lat (msec) : 4=0.04%, 10=0.64%, 20=2.00%, 50=24.15%, 100=15.67% 00:24:54.182 lat (msec) : 250=54.48%, 500=3.03% 00:24:54.182 cpu : usr=1.63%, sys=1.76%, ctx=2100, majf=0, minf=1 00:24:54.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:54.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.182 issued rwts: total=0,5508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.182 job7: (groupid=0, jobs=1): err= 0: pid=2325495: Fri Jul 26 02:00:35 2024 00:24:54.182 write: IOPS=434, BW=109MiB/s (114MB/s)(1111MiB/10214msec); 0 zone resets 00:24:54.182 slat (usec): min=20, max=70439, avg=1834.37, stdev=4656.18 00:24:54.182 clat (usec): min=1045, max=444520, avg=145200.05, stdev=83004.97 00:24:54.182 lat (usec): min=1081, max=444555, avg=147034.42, stdev=84117.45 00:24:54.182 clat percentiles (msec): 00:24:54.182 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 27], 20.00th=[ 69], 00:24:54.182 | 30.00th=[ 102], 40.00th=[ 123], 50.00th=[ 138], 60.00th=[ 163], 00:24:54.182 | 70.00th=[ 199], 80.00th=[ 236], 90.00th=[ 253], 95.00th=[ 262], 
00:24:54.182 | 99.00th=[ 326], 99.50th=[ 368], 99.90th=[ 435], 99.95th=[ 435], 00:24:54.182 | 99.99th=[ 443] 00:24:54.182 bw ( KiB/s): min=63488, max=225280, per=8.48%, avg=112098.75, stdev=51682.84, samples=20 00:24:54.182 iops : min= 248, max= 880, avg=437.85, stdev=201.86, samples=20 00:24:54.182 lat (msec) : 2=0.18%, 4=0.74%, 10=2.84%, 20=3.78%, 50=8.76% 00:24:54.182 lat (msec) : 100=12.76%, 250=59.60%, 500=11.34% 00:24:54.182 cpu : usr=1.44%, sys=1.32%, ctx=2171, majf=0, minf=1 00:24:54.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:54.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.182 issued rwts: total=0,4443,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.182 job8: (groupid=0, jobs=1): err= 0: pid=2325496: Fri Jul 26 02:00:35 2024 00:24:54.182 write: IOPS=448, BW=112MiB/s (118MB/s)(1147MiB/10218msec); 0 zone resets 00:24:54.182 slat (usec): min=19, max=122223, avg=1564.33, stdev=5220.80 00:24:54.182 clat (usec): min=977, max=458168, avg=140919.70, stdev=98903.26 00:24:54.182 lat (usec): min=1008, max=458230, avg=142484.02, stdev=100189.86 00:24:54.182 clat percentiles (msec): 00:24:54.182 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 30], 20.00th=[ 44], 00:24:54.182 | 30.00th=[ 46], 40.00th=[ 74], 50.00th=[ 136], 60.00th=[ 186], 00:24:54.182 | 70.00th=[ 218], 80.00th=[ 241], 90.00th=[ 271], 95.00th=[ 296], 00:24:54.182 | 99.00th=[ 326], 99.50th=[ 380], 99.90th=[ 447], 99.95th=[ 447], 00:24:54.182 | 99.99th=[ 460] 00:24:54.182 bw ( KiB/s): min=51200, max=268288, per=8.76%, avg=115779.90, stdev=64566.18, samples=20 00:24:54.182 iops : min= 200, max= 1048, avg=452.25, stdev=252.22, samples=20 00:24:54.182 lat (usec) : 1000=0.02% 00:24:54.182 lat (msec) : 2=0.26%, 4=0.55%, 10=1.72%, 20=4.47%, 50=26.01% 00:24:54.182 lat (msec) : 100=12.38%, 
250=37.72%, 500=16.87% 00:24:54.182 cpu : usr=1.36%, sys=1.68%, ctx=2636, majf=0, minf=1 00:24:54.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:54.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.182 issued rwts: total=0,4587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.182 job9: (groupid=0, jobs=1): err= 0: pid=2325497: Fri Jul 26 02:00:35 2024 00:24:54.182 write: IOPS=475, BW=119MiB/s (125MB/s)(1205MiB/10128msec); 0 zone resets 00:24:54.182 slat (usec): min=16, max=124928, avg=1534.03, stdev=5127.72 00:24:54.182 clat (usec): min=1043, max=364463, avg=132925.87, stdev=86328.59 00:24:54.182 lat (usec): min=1072, max=364499, avg=134459.90, stdev=87538.43 00:24:54.182 clat percentiles (msec): 00:24:54.182 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 23], 20.00th=[ 43], 00:24:54.182 | 30.00th=[ 64], 40.00th=[ 111], 50.00th=[ 134], 60.00th=[ 159], 00:24:54.182 | 70.00th=[ 178], 80.00th=[ 209], 90.00th=[ 251], 95.00th=[ 292], 00:24:54.182 | 99.00th=[ 321], 99.50th=[ 326], 99.90th=[ 330], 99.95th=[ 363], 00:24:54.182 | 99.99th=[ 363] 00:24:54.182 bw ( KiB/s): min=57344, max=257532, per=9.21%, avg=121691.50, stdev=55746.06, samples=20 00:24:54.182 iops : min= 224, max= 1005, avg=475.30, stdev=217.63, samples=20 00:24:54.182 lat (msec) : 2=0.52%, 4=1.10%, 10=3.40%, 20=3.90%, 50=16.69% 00:24:54.182 lat (msec) : 100=12.37%, 250=51.95%, 500=10.07% 00:24:54.182 cpu : usr=1.36%, sys=1.64%, ctx=2766, majf=0, minf=1 00:24:54.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:54.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.182 issued rwts: total=0,4818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.182 
latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.182 job10: (groupid=0, jobs=1): err= 0: pid=2325498: Fri Jul 26 02:00:35 2024 00:24:54.182 write: IOPS=447, BW=112MiB/s (117MB/s)(1142MiB/10220msec); 0 zone resets 00:24:54.182 slat (usec): min=18, max=93780, avg=1920.40, stdev=5111.23 00:24:54.182 clat (usec): min=1808, max=423347, avg=140859.14, stdev=91105.32 00:24:54.182 lat (usec): min=1861, max=423398, avg=142779.54, stdev=92320.42 00:24:54.182 clat percentiles (msec): 00:24:54.182 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 52], 00:24:54.183 | 30.00th=[ 77], 40.00th=[ 93], 50.00th=[ 117], 60.00th=[ 144], 00:24:54.183 | 70.00th=[ 213], 80.00th=[ 241], 90.00th=[ 266], 95.00th=[ 292], 00:24:54.183 | 99.00th=[ 321], 99.50th=[ 347], 99.90th=[ 409], 99.95th=[ 409], 00:24:54.183 | 99.99th=[ 422] 00:24:54.183 bw ( KiB/s): min=55296, max=264192, per=8.72%, avg=115301.40, stdev=65475.07, samples=20 00:24:54.183 iops : min= 216, max= 1032, avg=450.35, stdev=255.74, samples=20 00:24:54.183 lat (msec) : 2=0.04%, 4=0.50%, 10=2.39%, 20=2.71%, 50=13.70% 00:24:54.183 lat (msec) : 100=23.57%, 250=41.52%, 500=15.56% 00:24:54.183 cpu : usr=1.36%, sys=1.38%, ctx=1935, majf=0, minf=1 00:24:54.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:54.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.183 issued rwts: total=0,4569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.183 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.183 00:24:54.183 Run status group 0 (all jobs): 00:24:54.183 WRITE: bw=1291MiB/s (1353MB/s), 97.0MiB/s-137MiB/s (102MB/s-143MB/s), io=12.9GiB (13.8GB), run=10064-10220msec 00:24:54.183 00:24:54.183 Disk stats (read/write): 00:24:54.183 nvme0n1: ios=49/9574, merge=0/0, ticks=31/1239243, in_queue=1239274, util=97.33% 00:24:54.183 nvme10n1: ios=50/11045, merge=0/0, 
ticks=631/1237853, in_queue=1238484, util=100.00% 00:24:54.183 nvme1n1: ios=0/7899, merge=0/0, ticks=0/1240538, in_queue=1240538, util=97.64% 00:24:54.183 nvme2n1: ios=33/9209, merge=0/0, ticks=74/1217959, in_queue=1218033, util=97.89% 00:24:54.183 nvme3n1: ios=0/9837, merge=0/0, ticks=0/1227965, in_queue=1227965, util=97.79% 00:24:54.183 nvme4n1: ios=42/9556, merge=0/0, ticks=2107/1233147, in_queue=1235254, util=100.00% 00:24:54.183 nvme5n1: ios=45/10768, merge=0/0, ticks=716/1208242, in_queue=1208958, util=100.00% 00:24:54.183 nvme6n1: ios=0/8860, merge=0/0, ticks=0/1239339, in_queue=1239339, util=98.44% 00:24:54.183 nvme7n1: ios=0/9145, merge=0/0, ticks=0/1242290, in_queue=1242290, util=98.83% 00:24:54.183 nvme8n1: ios=0/9386, merge=0/0, ticks=0/1216516, in_queue=1216516, util=98.95% 00:24:54.183 nvme9n1: ios=48/9098, merge=0/0, ticks=2091/1231831, in_queue=1233922, util=100.00% 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:54.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 
00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:54.183 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.183 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:54.183 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:54.183 02:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.183 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:54.443 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.443 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:54.702 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.702 02:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.702 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:54.960 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:54.960 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.960 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:55.217 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 
1 controller(s) 00:24:55.217 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:55.217 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:55.217 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:55.217 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:55.217 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:55.217 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:55.217 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:55.218 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1219 -- # local i=0 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.218 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:55.476 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:55.476 02:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:55.476 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # 
lsblk -l -o NAME,SERIAL 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:55.476 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:55.477 rmmod nvme_tcp 00:24:55.477 rmmod nvme_fabrics 00:24:55.477 rmmod nvme_keyring 00:24:55.477 02:00:37 
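The trace above runs the same three-step teardown once per subsystem: `nvme disconnect` on the initiator, a wait until `lsblk` no longer shows the serial (SPDK1..SPDK11), then `nvmf_delete_subsystem` over RPC. A dry-run sketch of that loop, reconstructed from the trace (commands are echoed rather than executed, since they need a live SPDK target and root privileges):

```shell
# Dry-run reconstruction of the multiconnection.sh teardown loop seen above.
# NVMF_SUBSYS=11 matches the cnode1..cnode11 sequence in the log.
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    echo "nvme disconnect -n nqn.2016-06.io.spdk:cnode${i}"
    # waitforserial_disconnect SPDK${i} would poll 'lsblk -o NAME,SERIAL' here
    echo "rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode${i}"
done
```

The real script interleaves the `waitforserial_disconnect` poll between the two commands so the RPC deletion never races the kernel's removal of the block device.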
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2319422 ']' 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2319422 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 2319422 ']' 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 2319422 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2319422 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2319422' 00:24:55.477 killing process with pid 2319422 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 2319422 00:24:55.477 02:00:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 2319422 00:24:56.045 02:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
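The `killprocess 2319422` trace above checks the pid is alive (`kill -0`), inspects its command name with `ps`, then kills and waits on it. A minimal sketch of that helper under those assumptions (the body is reconstructed from the traced calls, not copied from autotest_common.sh; the `sudo` guard and `ps` lookup are simplified away):

```shell
# Simplified reconstruction of the killprocess helper traced above:
# confirm the pid exists, announce the kill, then kill and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # process must be alive
    echo "killing process with pid ${pid}"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap so it cannot linger as a zombie
    return 0
}

sleep 30 &       # stand-in for the nvmf target (reactor_0) process
killprocess $!
```

Waiting on the pid after the kill mirrors the `kill` / `wait` pair in the log and guarantees the port and hugepages held by the target are released before the next test starts.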
00:24:56.045 02:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:56.045 02:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:56.045 02:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.045 02:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.045 02:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.045 02:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.045 02:00:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:58.585 00:24:58.585 real 1m0.170s 00:24:58.585 user 3m21.756s 00:24:58.585 sys 0m23.799s 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.585 ************************************ 00:24:58.585 END TEST nvmf_multiconnection 00:24:58.585 ************************************ 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- 
# set +x 00:24:58.585 ************************************ 00:24:58.585 START TEST nvmf_initiator_timeout 00:24:58.585 ************************************ 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:58.585 * Looking for test storage... 00:24:58.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- 
# nvme gen-hostnqn 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.585 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.585 02:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:58.586 
02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:58.586 02:00:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.489 
02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:25:00.489 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.489 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:00.489 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.490 02:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:00.490 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.490 02:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:00.490 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:00.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:25:00.490 00:25:00.490 --- 10.0.0.2 ping statistics --- 00:25:00.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.490 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:25:00.490 00:25:00.490 --- 10.0.0.1 ping statistics --- 00:25:00.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.490 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:00.490 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:00.490 
02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.491 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:00.491 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.491 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2328805 00:25:00.491 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:00.491 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2328805 00:25:00.491 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 2328805 ']' 00:25:00.491 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.491 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:00.491 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.491 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:00.491 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.491 [2024-07-26 02:00:42.326740] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:25:00.491 [2024-07-26 02:00:42.326811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.491 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.491 [2024-07-26 02:00:42.395423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:00.491 [2024-07-26 02:00:42.488519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.491 [2024-07-26 02:00:42.488580] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.491 [2024-07-26 02:00:42.488596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.491 [2024-07-26 02:00:42.488609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.491 [2024-07-26 02:00:42.488621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:00.491 [2024-07-26 02:00:42.488694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.491 [2024-07-26 02:00:42.488728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.491 [2024-07-26 02:00:42.488847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.491 [2024-07-26 02:00:42.488850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.749 Malloc0 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.749 02:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.749 Delay0 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.749 [2024-07-26 02:00:42.658039] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.749 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.750 [2024-07-26 02:00:42.686387] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.750 02:00:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:01.687 02:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:01.687 02:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:01.688 02:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.688 02:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:01.688 02:00:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:03.615 02:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:03.615 02:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:03.615 02:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:03.615 02:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:03.615 02:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.615 02:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:03.615 02:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2329114 00:25:03.615 02:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:03.615 02:00:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:03.615 [global] 00:25:03.615 thread=1 00:25:03.615 invalidate=1 00:25:03.615 rw=write 00:25:03.615 time_based=1 00:25:03.615 runtime=60 00:25:03.615 ioengine=libaio 00:25:03.615 direct=1 00:25:03.615 bs=4096 00:25:03.615 iodepth=1 00:25:03.615 norandommap=0 00:25:03.615 numjobs=1 00:25:03.615 00:25:03.615 verify_dump=1 00:25:03.615 verify_backlog=512 00:25:03.615 verify_state_save=0 00:25:03.615 do_verify=1 00:25:03.615 verify=crc32c-intel 00:25:03.615 [job0] 00:25:03.615 filename=/dev/nvme0n1 00:25:03.615 Could not set queue depth (nvme0n1) 00:25:03.615 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:03.615 fio-3.35 00:25:03.615 Starting 1 thread 00:25:06.900 02:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:06.900 true 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:06.900 true 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:06.900 true 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.900 02:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:06.900 true 00:25:06.900 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.901 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:09.433 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:09.433 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.433 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:09.433 true 00:25:09.433 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.433 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:09.433 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.433 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:09.433 true 00:25:09.433 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.433 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:09.433 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.433 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:09.692 true 00:25:09.692 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:09.692 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:25:09.692 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:09.692 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:09.692 true
00:25:09.692 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:09.692 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0
00:25:09.692 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2329114
00:26:05.904
00:26:05.904 job0: (groupid=0, jobs=1): err= 0: pid=2329301: Fri Jul 26 02:01:45 2024
00:26:05.904   read: IOPS=241, BW=964KiB/s (987kB/s)(56.5MiB/60032msec)
00:26:05.904     slat (usec): min=5, max=11585, avg=12.59, stdev=115.96
00:26:05.904     clat (usec): min=252, max=41152k, avg=3852.81, stdev=342135.11
00:26:05.904      lat (usec): min=258, max=41152k, avg=3865.40, stdev=342135.25
00:26:05.904     clat percentiles (usec):
00:26:05.904      |  1.00th=[  277],  5.00th=[  289], 10.00th=[  306], 20.00th=[  318],
00:26:05.904      | 30.00th=[  330], 40.00th=[  334], 50.00th=[  343], 60.00th=[  351],
00:26:05.904      | 70.00th=[  359], 80.00th=[  371], 90.00th=[  388], 95.00th=[  416],
00:26:05.904      | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730],
00:26:05.904      | 99.99th=[42730]
00:26:05.904   write: IOPS=247, BW=989KiB/s (1013kB/s)(58.0MiB/60032msec); 0 zone resets
00:26:05.904     slat (usec): min=6, max=29263, avg=17.72, stdev=240.22
00:26:05.904     clat (usec): min=187, max=3720, avg=250.83, stdev=62.59
00:26:05.904      lat (usec): min=195, max=29662, avg=268.55, stdev=251.12
00:26:05.904     clat percentiles (usec):
00:26:05.904      |  1.00th=[  196],  5.00th=[  202], 10.00th=[  206], 20.00th=[  212],
00:26:05.904      | 30.00th=[  217], 40.00th=[  221], 50.00th=[  227], 60.00th=[  241],
00:26:05.904      | 70.00th=[  255], 80.00th=[  285], 90.00th=[  343], 95.00th=[  383],
00:26:05.904      | 99.00th=[  429], 99.50th=[  441], 99.90th=[  465], 99.95th=[  469],
00:26:05.904      | 99.99th=[  490]
00:26:05.904    bw (  KiB/s): min= 3264, max= 8192, per=100.00%, avg=5939.20, stdev=1423.01, samples=20
00:26:05.904    iops        : min=  816, max= 2048, avg=1484.80, stdev=355.75, samples=20
00:26:05.904   lat (usec)   : 250=33.93%, 500=64.84%, 750=0.42%, 1000=0.01%
00:26:05.904   lat (msec)   : 4=0.01%, 50=0.79%, >=2000=0.01%
00:26:05.904   cpu          : usr=0.50%, sys=0.90%, ctx=29324, majf=0, minf=2
00:26:05.904   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:05.904      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:05.904      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:05.904      issued rwts: total=14470,14848,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:05.904      latency   : target=0, window=0, percentile=100.00%, depth=1
00:26:05.904
00:26:05.904 Run status group 0 (all jobs):
00:26:05.904    READ: bw=964KiB/s (987kB/s), 964KiB/s-964KiB/s (987kB/s-987kB/s), io=56.5MiB (59.3MB), run=60032-60032msec
00:26:05.904   WRITE: bw=989KiB/s (1013kB/s), 989KiB/s-989KiB/s (1013kB/s-1013kB/s), io=58.0MiB (60.8MB), run=60032-60032msec
00:26:05.904
00:26:05.904 Disk stats (read/write):
00:26:05.904   nvme0n1: ios=14514/14848, merge=0/0, ticks=14708/3535, in_queue=18243, util=99.66%
00:26:05.904 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:26:05.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:26:05.904 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:26:05.904 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0
00:26:05.904 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:26:05.904 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:26:05.904 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected'
00:26:05.905 nvmf hotplug test: fio successful as expected
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:05.905 rmmod nvme_tcp
00:26:05.905 rmmod nvme_fabrics
00:26:05.905 rmmod nvme_keyring
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2328805 ']'
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2328805
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 2328805 ']'
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 2328805
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:05.905 02:01:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2328805
00:26:05.905 02:01:46
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2328805'
killing process with pid 2328805
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 2328805
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 2328805
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:05.905 02:01:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:06.472 02:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:06.472
00:26:06.472 real	1m8.224s
00:26:06.472 user	4m7.927s
00:26:06.472 sys	0m8.513s
00:26:06.472 02:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:06.472 02:01:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:06.472 ************************************
00:26:06.472 END TEST nvmf_initiator_timeout
00:26:06.472 ************************************
00:26:06.472 02:01:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]]
00:26:06.472 02:01:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']'
00:26:06.472 02:01:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs
00:26:06.472 02:01:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable
00:26:06.473 02:01:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=()
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=()
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=()
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=()
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=()
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=()
00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra --
nvmf/common.sh@297 -- # local -ga x722 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:08.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:08.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.375 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:08.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:08.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}")
00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 ))
00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:08.376 ************************************
00:26:08.376 START TEST nvmf_perf_adq
00:26:08.376 ************************************
00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:26:08.376 * Looking for test storage...
00:26:08.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.376 02:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:08.376 02:01:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:10.914 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.914 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:10.915 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:10.915 02:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:10.915 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:10.915 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:10.915 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:10.915 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:11.175 02:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:13.079 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:18.378 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:18.378 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:18.378 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.378 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:18.378 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:18.378 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:18.378 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.378 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:18.379 
02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
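The checks traced at nvmf/common.sh@350-351 look odd in the xtrace output (`[[ 0x159b == \0\x\1\0\1\7 ]]`) because bash escapes every character of a quoted `[[ == ]]` pattern when printing it; the comparison is a literal string match against the ConnectX device IDs, not a glob. A hedged sketch of the same classification (the ID values come from the log's `Found ... (0x8086 - 0x159b)` lines; the `family` variable name is illustrative, not from the script):

```shell
#!/usr/bin/env bash
# Sketch of the device-ID checks at nvmf/common.sh@350-351 above.
# Quoted patterns in [[ == ]] are literal, hence the \0\x\1\0\1\7 in xtrace.
device_id=0x159b   # E810 port, per the "Found 0000:0a:00.x (0x8086 - 0x159b)" lines

if [[ $device_id == "0x1017" || $device_id == "0x1019" ]]; then
  family=mlx5      # ConnectX-5 IDs take the RDMA-specific branch
else
  family=e810      # everything else here falls through as Intel E810
fi
echo "$device_id -> $family"
```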
00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:18.379 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:18.379 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.379 02:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:18.379 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.379 02:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:18.379 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:18.379 
02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.379 02:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.379 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.379 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.379 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:18.379 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.379 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.379 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.379 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:18.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
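The `ip`/`iptables` sequence traced above (nvmf/common.sh@244-264) isolates the target-side port in a private network namespace, addresses both ends of the link, and opens TCP port 4420 for the NVMe-oF listener. A dry-run sketch that prints the same steps instead of executing them, since the real commands need root and the physical `cvl_0_*` interfaces:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns plumbing at nvmf/common.sh@244-264 above.
# $run=echo prints each command; drop it to execute for real (root required).
nvmf_tcp_init_sketch() {
  local run="echo"
  $run ip netns add cvl_0_0_ns_spdk
  $run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  $run ip addr add 10.0.0.1/24 dev cvl_0_1
  $run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  $run ip link set cvl_0_1 up
  $run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  $run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
cmds=$(nvmf_tcp_init_sketch)
echo "$cmds"
```

The two `ping -c 1` probes that follow in the log are the sanity check that the initiator (10.0.0.1) and the namespaced target (10.0.0.2) can reach each other before the target starts.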
00:26:18.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:26:18.379 00:26:18.379 --- 10.0.0.2 ping statistics --- 00:26:18.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.379 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:26:18.379 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:18.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:26:18.379 00:26:18.380 --- 10.0.0.1 ping statistics --- 00:26:18.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.380 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2340695 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2340695 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2340695 ']' 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.380 [2024-07-26 02:02:00.144153] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
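The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a bounded polling loop (`waitforlisten` with `max_retries=100` in autotest_common.sh). A hedged sketch of that shape, with a temp file standing in for the RPC socket and a background task standing in for `nvmf_tgt` coming up (the retry count matches the log; the sleep interval is an assumption):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern from autotest_common.sh above:
# poll for the RPC socket path, give up after max_retries attempts.
waitforlisten_sketch() {
  local sock=$1 max_retries=100 i=0
  while [ ! -e "$sock" ]; do
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && return 1
    sleep 0.05
  done
  return 0
}

sock=$(mktemp -u)              # a path that does not exist yet
( sleep 0.2; : > "$sock" ) &   # stand-in for nvmf_tgt creating its socket
waitforlisten_sketch "$sock" && listening=yes || listening=no
wait
rm -f "$sock"
echo "listening=$listening"
```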
00:26:18.380 [2024-07-26 02:02:00.144231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.380 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.380 [2024-07-26 02:02:00.210877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:18.380 [2024-07-26 02:02:00.297419] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.380 [2024-07-26 02:02:00.297473] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.380 [2024-07-26 02:02:00.297497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.380 [2024-07-26 02:02:00.297508] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.380 [2024-07-26 02:02:00.297517] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:18.380 [2024-07-26 02:02:00.297615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.380 [2024-07-26 02:02:00.297672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.380 [2024-07-26 02:02:00.297748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.380 [2024-07-26 02:02:00.297750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.380 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:18.638 02:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.638 [2024-07-26 02:02:00.516034] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.638 Malloc1 00:26:18.638 02:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.638 [2024-07-26 02:02:00.567181] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2340842 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:18.638 02:02:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:18.638 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.170 02:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:21.170 02:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.170 02:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:21.170 02:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.170 02:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:21.170 "tick_rate": 2700000000, 00:26:21.170 "poll_groups": [ 00:26:21.170 { 00:26:21.170 "name": "nvmf_tgt_poll_group_000", 00:26:21.170 "admin_qpairs": 1, 00:26:21.170 "io_qpairs": 1, 00:26:21.170 "current_admin_qpairs": 1, 00:26:21.170 "current_io_qpairs": 1, 00:26:21.170 "pending_bdev_io": 0, 00:26:21.170 "completed_nvme_io": 20929, 00:26:21.170 "transports": [ 00:26:21.170 { 00:26:21.170 "trtype": "TCP" 00:26:21.170 } 00:26:21.170 ] 00:26:21.170 }, 00:26:21.170 { 00:26:21.170 "name": "nvmf_tgt_poll_group_001", 00:26:21.170 "admin_qpairs": 0, 00:26:21.170 "io_qpairs": 1, 00:26:21.170 "current_admin_qpairs": 0, 00:26:21.170 "current_io_qpairs": 1, 00:26:21.170 "pending_bdev_io": 0, 00:26:21.170 "completed_nvme_io": 18582, 00:26:21.170 "transports": [ 00:26:21.170 { 00:26:21.170 "trtype": "TCP" 00:26:21.170 } 00:26:21.170 ] 00:26:21.170 }, 00:26:21.170 { 00:26:21.170 "name": "nvmf_tgt_poll_group_002", 00:26:21.170 "admin_qpairs": 0, 00:26:21.170 "io_qpairs": 1, 00:26:21.170 "current_admin_qpairs": 0, 00:26:21.170 "current_io_qpairs": 1, 
00:26:21.170 "pending_bdev_io": 0, 00:26:21.170 "completed_nvme_io": 20545, 00:26:21.170 "transports": [ 00:26:21.170 { 00:26:21.170 "trtype": "TCP" 00:26:21.170 } 00:26:21.170 ] 00:26:21.170 }, 00:26:21.170 { 00:26:21.170 "name": "nvmf_tgt_poll_group_003", 00:26:21.170 "admin_qpairs": 0, 00:26:21.170 "io_qpairs": 1, 00:26:21.170 "current_admin_qpairs": 0, 00:26:21.170 "current_io_qpairs": 1, 00:26:21.170 "pending_bdev_io": 0, 00:26:21.170 "completed_nvme_io": 20570, 00:26:21.170 "transports": [ 00:26:21.170 { 00:26:21.170 "trtype": "TCP" 00:26:21.170 } 00:26:21.170 ] 00:26:21.170 } 00:26:21.170 ] 00:26:21.170 }' 00:26:21.170 02:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:21.170 02:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:21.170 02:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:21.170 02:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:21.170 02:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2340842 00:26:29.282 Initializing NVMe Controllers 00:26:29.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:29.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:29.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:29.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:29.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:29.282 Initialization complete. Launching workers. 
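The pass/fail gate traced at target/perf_adq.sh@77-79 above is that every one of the four poll groups reports exactly one active I/O queue pair, i.e. ADQ steered each connection to its own core. The script filters the `nvmf_get_stats` JSON with `jq` and `wc -l`; the sketch below gets the same count with `grep -c` over a trimmed copy of the stats shown in the log, to avoid a `jq` dependency:

```shell
#!/usr/bin/env bash
# Sketch of the ADQ steering check at target/perf_adq.sh@77-79 above:
# count poll groups with current_io_qpairs == 1 and require all four.
nvmf_stats='
{ "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 }
{ "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 }
{ "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1 }
{ "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1 }
'
count=$(grep -c '"current_io_qpairs": 1' <<< "$nvmf_stats")
if [ "$count" -ne 4 ]; then
  echo "ADQ steering failed: only $count of 4 poll groups active"
  exit 1
fi
echo "count=$count"
```

In the run above the count is 4 (`count=4`, `[[ 4 -ne 4 ]]` false), so the test proceeds to wait for `spdk_nvme_perf` and print the latency table.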
00:26:29.282 ======================================================== 00:26:29.282 Latency(us) 00:26:29.282 Device Information : IOPS MiB/s Average min max 00:26:29.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10797.60 42.18 5928.34 3288.70 7689.04 00:26:29.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9715.30 37.95 6587.25 2342.46 10326.13 00:26:29.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10729.70 41.91 5965.80 2927.55 8655.21 00:26:29.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10950.40 42.77 5844.93 5054.55 8486.15 00:26:29.282 ======================================================== 00:26:29.282 Total : 42192.99 164.82 6067.94 2342.46 10326.13 00:26:29.282 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:29.282 rmmod nvme_tcp 00:26:29.282 rmmod nvme_fabrics 00:26:29.282 rmmod nvme_keyring 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:29.282 02:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2340695 ']' 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2340695 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2340695 ']' 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2340695 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2340695 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2340695' 00:26:29.282 killing process with pid 2340695 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2340695 00:26:29.282 02:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2340695 00:26:29.282 02:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:29.282 02:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:29.282 02:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:29.282 02:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.282 02:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:26:29.282 02:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.282 02:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.282 02:02:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.188 02:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:31.188 02:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:31.188 02:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:31.752 02:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:34.283 02:02:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@298 -- # local -ga mlx 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:39.564 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:39.565 02:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:39.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:39.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:39.565 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:39.565 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:39.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:39.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:26:39.565 00:26:39.565 --- 10.0.0.2 ping statistics --- 00:26:39.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.565 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:26:39.565 00:26:39.565 --- 10.0.0.1 ping statistics --- 00:26:39.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.565 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:39.565 net.core.busy_poll = 1 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:39.565 net.core.busy_read = 1 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2343447 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2343447 00:26:39.565 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2343447 ']' 00:26:39.566 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.566 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:39.566 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.566 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:39.566 02:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.566 [2024-07-26 02:02:21.034767] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:26:39.566 [2024-07-26 02:02:21.034858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.566 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.566 [2024-07-26 02:02:21.100370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:39.566 [2024-07-26 02:02:21.191771] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.566 [2024-07-26 02:02:21.191840] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.566 [2024-07-26 02:02:21.191863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.566 [2024-07-26 02:02:21.191874] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.566 [2024-07-26 02:02:21.191883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:39.566 [2024-07-26 02:02:21.191962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.566 [2024-07-26 02:02:21.192028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.566 [2024-07-26 02:02:21.192098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.566 [2024-07-26 02:02:21.192102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:39.566 02:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.566 [2024-07-26 02:02:21.425939] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.566 Malloc1 00:26:39.566 02:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.566 [2024-07-26 02:02:21.479315] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2343483 00:26:39.566 02:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:39.566 02:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:39.566 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.097 02:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:42.097 02:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.097 02:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:42.097 02:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.097 02:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:42.097 "tick_rate": 2700000000, 00:26:42.097 "poll_groups": [ 00:26:42.097 { 00:26:42.097 "name": "nvmf_tgt_poll_group_000", 00:26:42.097 "admin_qpairs": 1, 00:26:42.097 "io_qpairs": 1, 00:26:42.097 "current_admin_qpairs": 1, 00:26:42.097 "current_io_qpairs": 1, 00:26:42.097 "pending_bdev_io": 0, 00:26:42.097 "completed_nvme_io": 24574, 00:26:42.097 "transports": [ 00:26:42.097 { 00:26:42.097 "trtype": "TCP" 00:26:42.097 } 00:26:42.097 ] 00:26:42.097 }, 00:26:42.097 { 00:26:42.097 "name": "nvmf_tgt_poll_group_001", 00:26:42.097 "admin_qpairs": 0, 00:26:42.097 "io_qpairs": 3, 00:26:42.097 "current_admin_qpairs": 0, 00:26:42.097 "current_io_qpairs": 3, 00:26:42.097 "pending_bdev_io": 0, 00:26:42.097 "completed_nvme_io": 25478, 00:26:42.097 "transports": [ 00:26:42.097 { 00:26:42.097 "trtype": "TCP" 00:26:42.097 } 00:26:42.097 ] 00:26:42.097 }, 00:26:42.097 { 00:26:42.097 "name": "nvmf_tgt_poll_group_002", 00:26:42.097 "admin_qpairs": 0, 00:26:42.097 "io_qpairs": 0, 00:26:42.097 "current_admin_qpairs": 0, 00:26:42.097 "current_io_qpairs": 0, 00:26:42.097 "pending_bdev_io": 0, 
00:26:42.097 "completed_nvme_io": 0, 00:26:42.097 "transports": [ 00:26:42.097 { 00:26:42.097 "trtype": "TCP" 00:26:42.097 } 00:26:42.097 ] 00:26:42.097 }, 00:26:42.097 { 00:26:42.097 "name": "nvmf_tgt_poll_group_003", 00:26:42.097 "admin_qpairs": 0, 00:26:42.097 "io_qpairs": 0, 00:26:42.097 "current_admin_qpairs": 0, 00:26:42.097 "current_io_qpairs": 0, 00:26:42.097 "pending_bdev_io": 0, 00:26:42.097 "completed_nvme_io": 0, 00:26:42.097 "transports": [ 00:26:42.097 { 00:26:42.097 "trtype": "TCP" 00:26:42.097 } 00:26:42.097 ] 00:26:42.097 } 00:26:42.097 ] 00:26:42.097 }' 00:26:42.097 02:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:42.097 02:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:42.097 02:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:42.097 02:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:42.097 02:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2343483 00:26:50.220 Initializing NVMe Controllers 00:26:50.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:50.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:50.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:50.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:50.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:50.220 Initialization complete. Launching workers. 
00:26:50.220 ======================================================== 00:26:50.220 Latency(us) 00:26:50.220 Device Information : IOPS MiB/s Average min max 00:26:50.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4604.50 17.99 13917.04 1857.52 64313.68 00:26:50.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4439.10 17.34 14464.72 2238.90 62491.46 00:26:50.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4420.60 17.27 14482.79 1878.56 59948.03 00:26:50.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13061.20 51.02 4899.87 1658.59 7783.05 00:26:50.220 ======================================================== 00:26:50.220 Total : 26525.39 103.61 9662.90 1658.59 64313.68 00:26:50.220 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:50.220 rmmod nvme_tcp 00:26:50.220 rmmod nvme_fabrics 00:26:50.220 rmmod nvme_keyring 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:50.220 02:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2343447 ']' 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2343447 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2343447 ']' 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2343447 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2343447 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2343447' 00:26:50.220 killing process with pid 2343447 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2343447 00:26:50.220 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2343447 00:26:50.220 02:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:50.220 02:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:50.220 02:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:50.220 02:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:50.220 02:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:26:50.220 02:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.220 02:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:50.220 02:02:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:53.552 00:26:53.552 real 0m44.775s 00:26:53.552 user 2m37.059s 00:26:53.552 sys 0m10.493s 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.552 ************************************ 00:26:53.552 END TEST nvmf_perf_adq 00:26:53.552 ************************************ 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:53.552 ************************************ 00:26:53.552 START TEST nvmf_shutdown 00:26:53.552 ************************************ 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:53.552 * Looking for test storage... 
00:26:53.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.552 02:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.552 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:53.553 02:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:53.553 ************************************ 00:26:53.553 START TEST nvmf_shutdown_tc1 00:26:53.553 ************************************ 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.553 02:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.553 02:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:55.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.461 02:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:55.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:55.461 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:55.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.461 02:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:55.461 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:55.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:26:55.462 00:26:55.462 --- 10.0.0.2 ping statistics --- 00:26:55.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.462 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:55.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:26:55.462 00:26:55.462 --- 10.0.0.1 ping statistics --- 00:26:55.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.462 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.462 
02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2346778 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2346778 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2346778 ']' 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:55.462 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.462 [2024-07-26 02:02:37.306594] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:26:55.462 [2024-07-26 02:02:37.306672] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.462 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.462 [2024-07-26 02:02:37.372844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.462 [2024-07-26 02:02:37.462892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.462 [2024-07-26 02:02:37.462953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.462 [2024-07-26 02:02:37.462967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.462 [2024-07-26 02:02:37.462978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.462 [2024-07-26 02:02:37.462988] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:55.462 [2024-07-26 02:02:37.463156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.462 [2024-07-26 02:02:37.463218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.462 [2024-07-26 02:02:37.463269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:55.462 [2024-07-26 02:02:37.463271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.721 [2024-07-26 02:02:37.617550] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.721 02:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.721 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.721 Malloc1 00:26:55.721 [2024-07-26 02:02:37.705692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.981 Malloc2 00:26:55.981 Malloc3 00:26:55.981 Malloc4 00:26:55.981 Malloc5 00:26:55.981 Malloc6 00:26:55.981 Malloc7 00:26:56.240 Malloc8 00:26:56.240 Malloc9 
00:26:56.240 Malloc10 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2346953 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2346953 /var/tmp/bdevperf.sock 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2346953 ']' 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:56.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:56.240 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.241 { 00:26:56.241 "params": { 00:26:56.241 "name": "Nvme$subsystem", 00:26:56.241 "trtype": "$TEST_TRANSPORT", 00:26:56.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.241 "adrfam": "ipv4", 00:26:56.241 "trsvcid": "$NVMF_PORT", 00:26:56.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.241 "hdgst": ${hdgst:-false}, 00:26:56.241 "ddgst": ${ddgst:-false} 00:26:56.241 }, 00:26:56.241 "method": "bdev_nvme_attach_controller" 00:26:56.241 } 00:26:56.241 EOF 00:26:56.241 )") 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.241 { 00:26:56.241 "params": { 00:26:56.241 "name": "Nvme$subsystem", 00:26:56.241 "trtype": "$TEST_TRANSPORT", 00:26:56.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.241 
"adrfam": "ipv4", 00:26:56.241 "trsvcid": "$NVMF_PORT", 00:26:56.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.241 "hdgst": ${hdgst:-false}, 00:26:56.241 "ddgst": ${ddgst:-false} 00:26:56.241 }, 00:26:56.241 "method": "bdev_nvme_attach_controller" 00:26:56.241 } 00:26:56.241 EOF 00:26:56.241 )") 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.241 { 00:26:56.241 "params": { 00:26:56.241 "name": "Nvme$subsystem", 00:26:56.241 "trtype": "$TEST_TRANSPORT", 00:26:56.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.241 "adrfam": "ipv4", 00:26:56.241 "trsvcid": "$NVMF_PORT", 00:26:56.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.241 "hdgst": ${hdgst:-false}, 00:26:56.241 "ddgst": ${ddgst:-false} 00:26:56.241 }, 00:26:56.241 "method": "bdev_nvme_attach_controller" 00:26:56.241 } 00:26:56.241 EOF 00:26:56.241 )") 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.241 { 00:26:56.241 "params": { 00:26:56.241 "name": "Nvme$subsystem", 00:26:56.241 "trtype": "$TEST_TRANSPORT", 00:26:56.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.241 "adrfam": "ipv4", 00:26:56.241 "trsvcid": "$NVMF_PORT", 00:26:56.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:26:56.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.241 "hdgst": ${hdgst:-false}, 00:26:56.241 "ddgst": ${ddgst:-false} 00:26:56.241 }, 00:26:56.241 "method": "bdev_nvme_attach_controller" 00:26:56.241 } 00:26:56.241 EOF 00:26:56.241 )") 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.241 { 00:26:56.241 "params": { 00:26:56.241 "name": "Nvme$subsystem", 00:26:56.241 "trtype": "$TEST_TRANSPORT", 00:26:56.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.241 "adrfam": "ipv4", 00:26:56.241 "trsvcid": "$NVMF_PORT", 00:26:56.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.241 "hdgst": ${hdgst:-false}, 00:26:56.241 "ddgst": ${ddgst:-false} 00:26:56.241 }, 00:26:56.241 "method": "bdev_nvme_attach_controller" 00:26:56.241 } 00:26:56.241 EOF 00:26:56.241 )") 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.241 { 00:26:56.241 "params": { 00:26:56.241 "name": "Nvme$subsystem", 00:26:56.241 "trtype": "$TEST_TRANSPORT", 00:26:56.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.241 "adrfam": "ipv4", 00:26:56.241 "trsvcid": "$NVMF_PORT", 00:26:56.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.241 "hdgst": ${hdgst:-false}, 00:26:56.241 "ddgst": 
${ddgst:-false} 00:26:56.241 }, 00:26:56.241 "method": "bdev_nvme_attach_controller" 00:26:56.241 } 00:26:56.241 EOF 00:26:56.241 )") 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.241 { 00:26:56.241 "params": { 00:26:56.241 "name": "Nvme$subsystem", 00:26:56.241 "trtype": "$TEST_TRANSPORT", 00:26:56.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.241 "adrfam": "ipv4", 00:26:56.241 "trsvcid": "$NVMF_PORT", 00:26:56.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.241 "hdgst": ${hdgst:-false}, 00:26:56.241 "ddgst": ${ddgst:-false} 00:26:56.241 }, 00:26:56.241 "method": "bdev_nvme_attach_controller" 00:26:56.241 } 00:26:56.241 EOF 00:26:56.241 )") 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.241 { 00:26:56.241 "params": { 00:26:56.241 "name": "Nvme$subsystem", 00:26:56.241 "trtype": "$TEST_TRANSPORT", 00:26:56.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.241 "adrfam": "ipv4", 00:26:56.241 "trsvcid": "$NVMF_PORT", 00:26:56.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.241 "hdgst": ${hdgst:-false}, 00:26:56.241 "ddgst": ${ddgst:-false} 00:26:56.241 }, 00:26:56.241 "method": "bdev_nvme_attach_controller" 00:26:56.241 } 00:26:56.241 EOF 00:26:56.241 
)") 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.241 { 00:26:56.241 "params": { 00:26:56.241 "name": "Nvme$subsystem", 00:26:56.241 "trtype": "$TEST_TRANSPORT", 00:26:56.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.241 "adrfam": "ipv4", 00:26:56.241 "trsvcid": "$NVMF_PORT", 00:26:56.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.241 "hdgst": ${hdgst:-false}, 00:26:56.241 "ddgst": ${ddgst:-false} 00:26:56.241 }, 00:26:56.241 "method": "bdev_nvme_attach_controller" 00:26:56.241 } 00:26:56.241 EOF 00:26:56.241 )") 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.241 { 00:26:56.241 "params": { 00:26:56.241 "name": "Nvme$subsystem", 00:26:56.241 "trtype": "$TEST_TRANSPORT", 00:26:56.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.241 "adrfam": "ipv4", 00:26:56.241 "trsvcid": "$NVMF_PORT", 00:26:56.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.241 "hdgst": ${hdgst:-false}, 00:26:56.241 "ddgst": ${ddgst:-false} 00:26:56.241 }, 00:26:56.241 "method": "bdev_nvme_attach_controller" 00:26:56.241 } 00:26:56.241 EOF 00:26:56.241 )") 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.241 
02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:56.241 02:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:56.241 "params": { 00:26:56.241 "name": "Nvme1", 00:26:56.241 "trtype": "tcp", 00:26:56.242 "traddr": "10.0.0.2", 00:26:56.242 "adrfam": "ipv4", 00:26:56.242 "trsvcid": "4420", 00:26:56.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:56.242 "hdgst": false, 00:26:56.242 "ddgst": false 00:26:56.242 }, 00:26:56.242 "method": "bdev_nvme_attach_controller" 00:26:56.242 },{ 00:26:56.242 "params": { 00:26:56.242 "name": "Nvme2", 00:26:56.242 "trtype": "tcp", 00:26:56.242 "traddr": "10.0.0.2", 00:26:56.242 "adrfam": "ipv4", 00:26:56.242 "trsvcid": "4420", 00:26:56.242 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:56.242 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:56.242 "hdgst": false, 00:26:56.242 "ddgst": false 00:26:56.242 }, 00:26:56.242 "method": "bdev_nvme_attach_controller" 00:26:56.242 },{ 00:26:56.242 "params": { 00:26:56.242 "name": "Nvme3", 00:26:56.242 "trtype": "tcp", 00:26:56.242 "traddr": "10.0.0.2", 00:26:56.242 "adrfam": "ipv4", 00:26:56.242 "trsvcid": "4420", 00:26:56.242 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:56.242 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:56.242 "hdgst": false, 00:26:56.242 "ddgst": false 00:26:56.242 }, 00:26:56.242 "method": "bdev_nvme_attach_controller" 00:26:56.242 },{ 00:26:56.242 "params": { 00:26:56.242 "name": "Nvme4", 00:26:56.242 "trtype": "tcp", 00:26:56.242 "traddr": "10.0.0.2", 00:26:56.242 "adrfam": "ipv4", 00:26:56.242 "trsvcid": "4420", 00:26:56.242 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:56.242 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:56.242 "hdgst": false, 00:26:56.242 "ddgst": false 00:26:56.242 }, 
00:26:56.242 "method": "bdev_nvme_attach_controller" 00:26:56.242 },{ 00:26:56.242 "params": { 00:26:56.242 "name": "Nvme5", 00:26:56.242 "trtype": "tcp", 00:26:56.242 "traddr": "10.0.0.2", 00:26:56.242 "adrfam": "ipv4", 00:26:56.242 "trsvcid": "4420", 00:26:56.242 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:56.242 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:56.242 "hdgst": false, 00:26:56.242 "ddgst": false 00:26:56.242 }, 00:26:56.242 "method": "bdev_nvme_attach_controller" 00:26:56.242 },{ 00:26:56.242 "params": { 00:26:56.242 "name": "Nvme6", 00:26:56.242 "trtype": "tcp", 00:26:56.242 "traddr": "10.0.0.2", 00:26:56.242 "adrfam": "ipv4", 00:26:56.242 "trsvcid": "4420", 00:26:56.242 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:56.242 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:56.242 "hdgst": false, 00:26:56.242 "ddgst": false 00:26:56.242 }, 00:26:56.242 "method": "bdev_nvme_attach_controller" 00:26:56.242 },{ 00:26:56.242 "params": { 00:26:56.242 "name": "Nvme7", 00:26:56.242 "trtype": "tcp", 00:26:56.242 "traddr": "10.0.0.2", 00:26:56.242 "adrfam": "ipv4", 00:26:56.242 "trsvcid": "4420", 00:26:56.242 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:56.242 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:56.242 "hdgst": false, 00:26:56.242 "ddgst": false 00:26:56.242 }, 00:26:56.242 "method": "bdev_nvme_attach_controller" 00:26:56.242 },{ 00:26:56.242 "params": { 00:26:56.242 "name": "Nvme8", 00:26:56.242 "trtype": "tcp", 00:26:56.242 "traddr": "10.0.0.2", 00:26:56.242 "adrfam": "ipv4", 00:26:56.242 "trsvcid": "4420", 00:26:56.242 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:56.242 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:56.242 "hdgst": false, 00:26:56.242 "ddgst": false 00:26:56.242 }, 00:26:56.242 "method": "bdev_nvme_attach_controller" 00:26:56.242 },{ 00:26:56.242 "params": { 00:26:56.242 "name": "Nvme9", 00:26:56.242 "trtype": "tcp", 00:26:56.242 "traddr": "10.0.0.2", 00:26:56.242 "adrfam": "ipv4", 00:26:56.242 "trsvcid": "4420", 00:26:56.242 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:56.242 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:56.242 "hdgst": false, 00:26:56.242 "ddgst": false 00:26:56.242 }, 00:26:56.242 "method": "bdev_nvme_attach_controller" 00:26:56.242 },{ 00:26:56.242 "params": { 00:26:56.242 "name": "Nvme10", 00:26:56.242 "trtype": "tcp", 00:26:56.242 "traddr": "10.0.0.2", 00:26:56.242 "adrfam": "ipv4", 00:26:56.242 "trsvcid": "4420", 00:26:56.242 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:56.242 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:56.242 "hdgst": false, 00:26:56.242 "ddgst": false 00:26:56.242 }, 00:26:56.242 "method": "bdev_nvme_attach_controller" 00:26:56.242 }' 00:26:56.242 [2024-07-26 02:02:38.222870] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:26:56.242 [2024-07-26 02:02:38.222955] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:56.500 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.500 [2024-07-26 02:02:38.289085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.500 [2024-07-26 02:02:38.376583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.402 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:58.402 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:58.402 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:58.402 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.402 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@10 -- # set +x 00:26:58.402 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.402 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2346953 00:26:58.402 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:58.402 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:59.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2346953 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2346778 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.335 { 00:26:59.335 "params": { 00:26:59.335 "name": "Nvme$subsystem", 00:26:59.335 "trtype": "$TEST_TRANSPORT", 00:26:59.335 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:59.335 "adrfam": "ipv4", 00:26:59.335 "trsvcid": "$NVMF_PORT", 00:26:59.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.335 "hdgst": ${hdgst:-false}, 00:26:59.335 "ddgst": ${ddgst:-false} 00:26:59.335 }, 00:26:59.335 "method": "bdev_nvme_attach_controller" 00:26:59.335 } 00:26:59.335 EOF 00:26:59.335 )") 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.335 { 00:26:59.335 "params": { 00:26:59.335 "name": "Nvme$subsystem", 00:26:59.335 "trtype": "$TEST_TRANSPORT", 00:26:59.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.335 "adrfam": "ipv4", 00:26:59.335 "trsvcid": "$NVMF_PORT", 00:26:59.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.335 "hdgst": ${hdgst:-false}, 00:26:59.335 "ddgst": ${ddgst:-false} 00:26:59.335 }, 00:26:59.335 "method": "bdev_nvme_attach_controller" 00:26:59.335 } 00:26:59.335 EOF 00:26:59.335 )") 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.335 { 00:26:59.335 "params": { 00:26:59.335 "name": "Nvme$subsystem", 00:26:59.335 "trtype": "$TEST_TRANSPORT", 00:26:59.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.335 "adrfam": "ipv4", 00:26:59.335 "trsvcid": "$NVMF_PORT", 00:26:59.335 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.335 "hdgst": ${hdgst:-false}, 00:26:59.335 "ddgst": ${ddgst:-false} 00:26:59.335 }, 00:26:59.335 "method": "bdev_nvme_attach_controller" 00:26:59.335 } 00:26:59.335 EOF 00:26:59.335 )") 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.335 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.335 { 00:26:59.335 "params": { 00:26:59.335 "name": "Nvme$subsystem", 00:26:59.335 "trtype": "$TEST_TRANSPORT", 00:26:59.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.335 "adrfam": "ipv4", 00:26:59.335 "trsvcid": "$NVMF_PORT", 00:26:59.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.335 "hdgst": ${hdgst:-false}, 00:26:59.335 "ddgst": ${ddgst:-false} 00:26:59.335 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 } 00:26:59.336 EOF 00:26:59.336 )") 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.336 { 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme$subsystem", 00:26:59.336 "trtype": "$TEST_TRANSPORT", 00:26:59.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "$NVMF_PORT", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.336 "hdgst": 
${hdgst:-false}, 00:26:59.336 "ddgst": ${ddgst:-false} 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 } 00:26:59.336 EOF 00:26:59.336 )") 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.336 { 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme$subsystem", 00:26:59.336 "trtype": "$TEST_TRANSPORT", 00:26:59.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "$NVMF_PORT", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.336 "hdgst": ${hdgst:-false}, 00:26:59.336 "ddgst": ${ddgst:-false} 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 } 00:26:59.336 EOF 00:26:59.336 )") 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.336 { 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme$subsystem", 00:26:59.336 "trtype": "$TEST_TRANSPORT", 00:26:59.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "$NVMF_PORT", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.336 "hdgst": ${hdgst:-false}, 00:26:59.336 "ddgst": ${ddgst:-false} 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 
00:26:59.336 } 00:26:59.336 EOF 00:26:59.336 )") 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.336 { 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme$subsystem", 00:26:59.336 "trtype": "$TEST_TRANSPORT", 00:26:59.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "$NVMF_PORT", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.336 "hdgst": ${hdgst:-false}, 00:26:59.336 "ddgst": ${ddgst:-false} 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 } 00:26:59.336 EOF 00:26:59.336 )") 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.336 { 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme$subsystem", 00:26:59.336 "trtype": "$TEST_TRANSPORT", 00:26:59.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "$NVMF_PORT", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.336 "hdgst": ${hdgst:-false}, 00:26:59.336 "ddgst": ${ddgst:-false} 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 } 00:26:59.336 EOF 00:26:59.336 )") 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.336 { 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme$subsystem", 00:26:59.336 "trtype": "$TEST_TRANSPORT", 00:26:59.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "$NVMF_PORT", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.336 "hdgst": ${hdgst:-false}, 00:26:59.336 "ddgst": ${ddgst:-false} 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 } 00:26:59.336 EOF 00:26:59.336 )") 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:59.336 02:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme1", 00:26:59.336 "trtype": "tcp", 00:26:59.336 "traddr": "10.0.0.2", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "4420", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.336 "hdgst": false, 00:26:59.336 "ddgst": false 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 },{ 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme2", 00:26:59.336 "trtype": "tcp", 00:26:59.336 "traddr": "10.0.0.2", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "4420", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:59.336 "hdgst": false, 00:26:59.336 "ddgst": false 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 },{ 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme3", 00:26:59.336 "trtype": "tcp", 00:26:59.336 "traddr": "10.0.0.2", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "4420", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:59.336 "hdgst": false, 00:26:59.336 "ddgst": false 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 },{ 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme4", 00:26:59.336 "trtype": "tcp", 00:26:59.336 "traddr": "10.0.0.2", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "4420", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:59.336 "hdgst": false, 00:26:59.336 "ddgst": false 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 },{ 00:26:59.336 "params": { 
00:26:59.336 "name": "Nvme5", 00:26:59.336 "trtype": "tcp", 00:26:59.336 "traddr": "10.0.0.2", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "4420", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:59.336 "hdgst": false, 00:26:59.336 "ddgst": false 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 },{ 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme6", 00:26:59.336 "trtype": "tcp", 00:26:59.336 "traddr": "10.0.0.2", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "4420", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:59.336 "hdgst": false, 00:26:59.336 "ddgst": false 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 },{ 00:26:59.336 "params": { 00:26:59.336 "name": "Nvme7", 00:26:59.336 "trtype": "tcp", 00:26:59.336 "traddr": "10.0.0.2", 00:26:59.336 "adrfam": "ipv4", 00:26:59.336 "trsvcid": "4420", 00:26:59.336 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:59.336 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:59.336 "hdgst": false, 00:26:59.336 "ddgst": false 00:26:59.336 }, 00:26:59.336 "method": "bdev_nvme_attach_controller" 00:26:59.336 },{ 00:26:59.337 "params": { 00:26:59.337 "name": "Nvme8", 00:26:59.337 "trtype": "tcp", 00:26:59.337 "traddr": "10.0.0.2", 00:26:59.337 "adrfam": "ipv4", 00:26:59.337 "trsvcid": "4420", 00:26:59.337 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:59.337 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:59.337 "hdgst": false, 00:26:59.337 "ddgst": false 00:26:59.337 }, 00:26:59.337 "method": "bdev_nvme_attach_controller" 00:26:59.337 },{ 00:26:59.337 "params": { 00:26:59.337 "name": "Nvme9", 00:26:59.337 "trtype": "tcp", 00:26:59.337 "traddr": "10.0.0.2", 00:26:59.337 "adrfam": "ipv4", 00:26:59.337 "trsvcid": "4420", 00:26:59.337 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:59.337 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:26:59.337 "hdgst": false, 00:26:59.337 "ddgst": false 00:26:59.337 }, 00:26:59.337 "method": "bdev_nvme_attach_controller" 00:26:59.337 },{ 00:26:59.337 "params": { 00:26:59.337 "name": "Nvme10", 00:26:59.337 "trtype": "tcp", 00:26:59.337 "traddr": "10.0.0.2", 00:26:59.337 "adrfam": "ipv4", 00:26:59.337 "trsvcid": "4420", 00:26:59.337 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:59.337 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:59.337 "hdgst": false, 00:26:59.337 "ddgst": false 00:26:59.337 }, 00:26:59.337 "method": "bdev_nvme_attach_controller" 00:26:59.337 }' 00:26:59.337 [2024-07-26 02:02:41.301557] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:26:59.337 [2024-07-26 02:02:41.301645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347375 ] 00:26:59.337 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.596 [2024-07-26 02:02:41.368112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.596 [2024-07-26 02:02:41.458586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.972 Running I/O for 1 seconds... 
00:27:02.355 00:27:02.355 Latency(us) 00:27:02.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.355 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.355 Verification LBA range: start 0x0 length 0x400 00:27:02.355 Nvme1n1 : 1.12 230.70 14.42 0.00 0.00 273617.11 4805.97 257872.02 00:27:02.355 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.355 Verification LBA range: start 0x0 length 0x400 00:27:02.355 Nvme2n1 : 1.06 245.53 15.35 0.00 0.00 252270.84 5728.33 254765.13 00:27:02.355 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.355 Verification LBA range: start 0x0 length 0x400 00:27:02.355 Nvme3n1 : 1.06 188.41 11.78 0.00 0.00 320492.58 5218.61 264085.81 00:27:02.355 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.355 Verification LBA range: start 0x0 length 0x400 00:27:02.355 Nvme4n1 : 1.15 282.26 17.64 0.00 0.00 209411.87 13010.11 234570.33 00:27:02.355 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.355 Verification LBA range: start 0x0 length 0x400 00:27:02.355 Nvme5n1 : 1.15 222.25 13.89 0.00 0.00 266522.74 21845.33 254765.13 00:27:02.355 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.355 Verification LBA range: start 0x0 length 0x400 00:27:02.355 Nvme6n1 : 1.10 233.76 14.61 0.00 0.00 248083.72 21942.42 257872.02 00:27:02.355 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.355 Verification LBA range: start 0x0 length 0x400 00:27:02.355 Nvme7n1 : 1.17 217.94 13.62 0.00 0.00 263397.45 21845.33 284280.60 00:27:02.355 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.355 Verification LBA range: start 0x0 length 0x400 00:27:02.355 Nvme8n1 : 1.18 271.78 16.99 0.00 0.00 207622.75 16408.27 257872.02 00:27:02.355 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:27:02.355 Verification LBA range: start 0x0 length 0x400 00:27:02.355 Nvme9n1 : 1.17 219.64 13.73 0.00 0.00 252298.43 22039.51 290494.39 00:27:02.355 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.355 Verification LBA range: start 0x0 length 0x400 00:27:02.355 Nvme10n1 : 1.18 270.53 16.91 0.00 0.00 201819.63 15146.10 259425.47 00:27:02.355 =================================================================================================================== 00:27:02.355 Total : 2382.80 148.92 0.00 0.00 244962.35 4805.97 290494.39 00:27:02.355 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:02.356 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:02.356 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:02.356 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:02.356 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:02.356 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:02.356 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:02.356 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:02.356 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:02.356 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:02.356 
02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:02.356 rmmod nvme_tcp 00:27:02.614 rmmod nvme_fabrics 00:27:02.614 rmmod nvme_keyring 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2346778 ']' 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2346778 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2346778 ']' 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2346778 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2346778 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2346778' 00:27:02.614 killing process 
with pid 2346778 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2346778 00:27:02.614 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2346778 00:27:02.872 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:02.872 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:02.872 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:02.872 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.872 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.872 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.872 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.872 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.411 00:27:05.411 real 0m11.720s 00:27:05.411 user 0m33.984s 00:27:05.411 sys 0m3.293s 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:05.411 ************************************ 00:27:05.411 END TEST nvmf_shutdown_tc1 00:27:05.411 ************************************ 
00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:05.411 ************************************ 00:27:05.411 START TEST nvmf_shutdown_tc2 00:27:05.411 ************************************ 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.411 02:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.411 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:05.412 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.412 02:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:05.412 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:05.412 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:05.412 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.412 
02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.412 02:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:05.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:05.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:27:05.412 00:27:05.412 --- 10.0.0.2 ping statistics --- 00:27:05.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.412 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:05.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:27:05.412 00:27:05.412 --- 10.0.0.1 ping statistics --- 00:27:05.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.412 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:05.412 02:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2348143 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2348143 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2348143 ']' 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.412 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.412 [2024-07-26 02:02:47.191770] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:27:05.413 [2024-07-26 02:02:47.191836] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.413 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.413 [2024-07-26 02:02:47.258819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.413 [2024-07-26 02:02:47.350723] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.413 [2024-07-26 02:02:47.350782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.413 [2024-07-26 02:02:47.350799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.413 [2024-07-26 02:02:47.350812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.413 [2024-07-26 02:02:47.350824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:05.413 [2024-07-26 02:02:47.350924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.413 [2024-07-26 02:02:47.351021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.413 [2024-07-26 02:02:47.351084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:05.413 [2024-07-26 02:02:47.351088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.673 [2024-07-26 02:02:47.507527] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.673 02:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.673 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.673 Malloc1 00:27:05.673 [2024-07-26 02:02:47.586651] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.673 Malloc2 00:27:05.673 Malloc3 00:27:05.932 Malloc4 00:27:05.932 Malloc5 00:27:05.932 Malloc6 00:27:05.932 Malloc7 00:27:05.932 Malloc8 00:27:06.191 Malloc9 
00:27:06.191 Malloc10 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2348323 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2348323 /var/tmp/bdevperf.sock 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2348323 ']' 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:27:06.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.191 { 00:27:06.191 "params": { 00:27:06.191 "name": "Nvme$subsystem", 00:27:06.191 "trtype": "$TEST_TRANSPORT", 00:27:06.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.191 "adrfam": "ipv4", 00:27:06.191 "trsvcid": "$NVMF_PORT", 00:27:06.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.191 "hdgst": ${hdgst:-false}, 00:27:06.191 "ddgst": ${ddgst:-false} 00:27:06.191 }, 00:27:06.191 "method": "bdev_nvme_attach_controller" 00:27:06.191 } 00:27:06.191 EOF 00:27:06.191 )") 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.191 { 00:27:06.191 "params": { 00:27:06.191 "name": "Nvme$subsystem", 00:27:06.191 "trtype": "$TEST_TRANSPORT", 00:27:06.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.191 "adrfam": "ipv4", 00:27:06.191 "trsvcid": "$NVMF_PORT", 00:27:06.191 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.191 "hdgst": ${hdgst:-false}, 00:27:06.191 "ddgst": ${ddgst:-false} 00:27:06.191 }, 00:27:06.191 "method": "bdev_nvme_attach_controller" 00:27:06.191 } 00:27:06.191 EOF 00:27:06.191 )") 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.191 { 00:27:06.191 "params": { 00:27:06.191 "name": "Nvme$subsystem", 00:27:06.191 "trtype": "$TEST_TRANSPORT", 00:27:06.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.191 "adrfam": "ipv4", 00:27:06.191 "trsvcid": "$NVMF_PORT", 00:27:06.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.191 "hdgst": ${hdgst:-false}, 00:27:06.191 "ddgst": ${ddgst:-false} 00:27:06.191 }, 00:27:06.191 "method": "bdev_nvme_attach_controller" 00:27:06.191 } 00:27:06.191 EOF 00:27:06.191 )") 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.191 { 00:27:06.191 "params": { 00:27:06.191 "name": "Nvme$subsystem", 00:27:06.191 "trtype": "$TEST_TRANSPORT", 00:27:06.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.191 "adrfam": "ipv4", 00:27:06.191 "trsvcid": "$NVMF_PORT", 00:27:06.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.191 "hdgst": 
${hdgst:-false}, 00:27:06.191 "ddgst": ${ddgst:-false} 00:27:06.191 }, 00:27:06.191 "method": "bdev_nvme_attach_controller" 00:27:06.191 } 00:27:06.191 EOF 00:27:06.191 )") 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.191 { 00:27:06.191 "params": { 00:27:06.191 "name": "Nvme$subsystem", 00:27:06.191 "trtype": "$TEST_TRANSPORT", 00:27:06.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.191 "adrfam": "ipv4", 00:27:06.191 "trsvcid": "$NVMF_PORT", 00:27:06.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.191 "hdgst": ${hdgst:-false}, 00:27:06.191 "ddgst": ${ddgst:-false} 00:27:06.191 }, 00:27:06.191 "method": "bdev_nvme_attach_controller" 00:27:06.191 } 00:27:06.191 EOF 00:27:06.191 )") 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.191 { 00:27:06.191 "params": { 00:27:06.191 "name": "Nvme$subsystem", 00:27:06.191 "trtype": "$TEST_TRANSPORT", 00:27:06.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.191 "adrfam": "ipv4", 00:27:06.191 "trsvcid": "$NVMF_PORT", 00:27:06.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.191 "hdgst": ${hdgst:-false}, 00:27:06.191 "ddgst": ${ddgst:-false} 00:27:06.191 }, 00:27:06.191 "method": "bdev_nvme_attach_controller" 
00:27:06.191 } 00:27:06.191 EOF 00:27:06.191 )") 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.191 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.191 { 00:27:06.191 "params": { 00:27:06.191 "name": "Nvme$subsystem", 00:27:06.191 "trtype": "$TEST_TRANSPORT", 00:27:06.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.191 "adrfam": "ipv4", 00:27:06.191 "trsvcid": "$NVMF_PORT", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.192 "hdgst": ${hdgst:-false}, 00:27:06.192 "ddgst": ${ddgst:-false} 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 } 00:27:06.192 EOF 00:27:06.192 )") 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.192 { 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme$subsystem", 00:27:06.192 "trtype": "$TEST_TRANSPORT", 00:27:06.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "$NVMF_PORT", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.192 "hdgst": ${hdgst:-false}, 00:27:06.192 "ddgst": ${ddgst:-false} 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 } 00:27:06.192 EOF 00:27:06.192 )") 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@554 -- # cat 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.192 { 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme$subsystem", 00:27:06.192 "trtype": "$TEST_TRANSPORT", 00:27:06.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "$NVMF_PORT", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.192 "hdgst": ${hdgst:-false}, 00:27:06.192 "ddgst": ${ddgst:-false} 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 } 00:27:06.192 EOF 00:27:06.192 )") 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.192 { 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme$subsystem", 00:27:06.192 "trtype": "$TEST_TRANSPORT", 00:27:06.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "$NVMF_PORT", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.192 "hdgst": ${hdgst:-false}, 00:27:06.192 "ddgst": ${ddgst:-false} 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 } 00:27:06.192 EOF 00:27:06.192 )") 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@556 -- # jq . 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:06.192 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme1", 00:27:06.192 "trtype": "tcp", 00:27:06.192 "traddr": "10.0.0.2", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "4420", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:06.192 "hdgst": false, 00:27:06.192 "ddgst": false 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 },{ 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme2", 00:27:06.192 "trtype": "tcp", 00:27:06.192 "traddr": "10.0.0.2", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "4420", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:06.192 "hdgst": false, 00:27:06.192 "ddgst": false 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 },{ 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme3", 00:27:06.192 "trtype": "tcp", 00:27:06.192 "traddr": "10.0.0.2", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "4420", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:06.192 "hdgst": false, 00:27:06.192 "ddgst": false 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 },{ 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme4", 00:27:06.192 "trtype": "tcp", 00:27:06.192 "traddr": "10.0.0.2", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "4420", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:06.192 "hdgst": false, 00:27:06.192 "ddgst": false 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 },{ 
00:27:06.192 "params": { 00:27:06.192 "name": "Nvme5", 00:27:06.192 "trtype": "tcp", 00:27:06.192 "traddr": "10.0.0.2", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "4420", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:06.192 "hdgst": false, 00:27:06.192 "ddgst": false 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 },{ 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme6", 00:27:06.192 "trtype": "tcp", 00:27:06.192 "traddr": "10.0.0.2", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "4420", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:06.192 "hdgst": false, 00:27:06.192 "ddgst": false 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 },{ 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme7", 00:27:06.192 "trtype": "tcp", 00:27:06.192 "traddr": "10.0.0.2", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "4420", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:06.192 "hdgst": false, 00:27:06.192 "ddgst": false 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 },{ 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme8", 00:27:06.192 "trtype": "tcp", 00:27:06.192 "traddr": "10.0.0.2", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "4420", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:06.192 "hdgst": false, 00:27:06.192 "ddgst": false 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 },{ 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme9", 00:27:06.192 "trtype": "tcp", 00:27:06.192 "traddr": "10.0.0.2", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "4420", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:06.192 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:06.192 "hdgst": false, 00:27:06.192 "ddgst": false 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 },{ 00:27:06.192 "params": { 00:27:06.192 "name": "Nvme10", 00:27:06.192 "trtype": "tcp", 00:27:06.192 "traddr": "10.0.0.2", 00:27:06.192 "adrfam": "ipv4", 00:27:06.192 "trsvcid": "4420", 00:27:06.192 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:06.192 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:06.192 "hdgst": false, 00:27:06.192 "ddgst": false 00:27:06.192 }, 00:27:06.192 "method": "bdev_nvme_attach_controller" 00:27:06.192 }' 00:27:06.192 [2024-07-26 02:02:48.087624] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:27:06.192 [2024-07-26 02:02:48.087697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2348323 ] 00:27:06.192 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.192 [2024-07-26 02:02:48.151780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.451 [2024-07-26 02:02:48.238920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.355 Running I/O for 10 seconds... 
00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:08.355 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:08.614 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:08.614 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:08.614 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:08.614 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:08.614 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.614 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:08.614 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.614 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:08.614 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:08.614 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2348323 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2348323 
']' 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2348323 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2348323 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2348323' 00:27:08.874 killing process with pid 2348323 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2348323 00:27:08.874 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2348323 00:27:09.133 Received shutdown signal, test time was about 0.986378 seconds 00:27:09.133 00:27:09.133 Latency(us) 00:27:09.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.133 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.133 Verification LBA range: start 0x0 length 0x400 00:27:09.133 Nvme1n1 : 0.96 200.94 12.56 0.00 0.00 314271.92 26020.22 315349.52 00:27:09.133 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.133 Verification LBA range: start 0x0 length 0x400 00:27:09.133 Nvme2n1 : 0.97 198.00 12.38 0.00 0.00 313041.10 30292.20 295154.73 00:27:09.133 Job: Nvme3n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.133 Verification LBA range: start 0x0 length 0x400 00:27:09.133 Nvme3n1 : 0.94 204.17 12.76 0.00 0.00 297321.05 17864.63 324670.20 00:27:09.133 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.133 Verification LBA range: start 0x0 length 0x400 00:27:09.133 Nvme4n1 : 0.98 195.45 12.22 0.00 0.00 305283.98 27962.03 315349.52 00:27:09.133 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.133 Verification LBA range: start 0x0 length 0x400 00:27:09.133 Nvme5n1 : 0.97 197.47 12.34 0.00 0.00 296015.14 20680.25 346418.44 00:27:09.133 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.133 Verification LBA range: start 0x0 length 0x400 00:27:09.133 Nvme6n1 : 0.98 196.17 12.26 0.00 0.00 291816.74 32234.00 330883.98 00:27:09.134 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.134 Verification LBA range: start 0x0 length 0x400 00:27:09.134 Nvme7n1 : 0.95 201.53 12.60 0.00 0.00 277160.64 23787.14 346418.44 00:27:09.134 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.134 Verification LBA range: start 0x0 length 0x400 00:27:09.134 Nvme8n1 : 0.96 199.13 12.45 0.00 0.00 274454.76 22719.15 299815.06 00:27:09.134 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.134 Verification LBA range: start 0x0 length 0x400 00:27:09.134 Nvme9n1 : 0.99 194.82 12.18 0.00 0.00 275755.11 19126.80 333990.87 00:27:09.134 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:09.134 Verification LBA range: start 0x0 length 0x400 00:27:09.134 Nvme10n1 : 0.94 136.74 8.55 0.00 0.00 379944.20 27962.03 365059.79 00:27:09.134 =================================================================================================================== 00:27:09.134 Total : 1924.43 120.28 0.00 0.00 299836.20 17864.63 365059.79 
00:27:09.392 02:02:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2348143 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:10.326 rmmod nvme_tcp 00:27:10.326 rmmod nvme_fabrics 00:27:10.326 rmmod nvme_keyring 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2348143 ']' 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2348143 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2348143 ']' 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2348143 00:27:10.326 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:27:10.327 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:10.327 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2348143 00:27:10.327 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:10.327 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:10.327 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2348143' 00:27:10.327 killing process with pid 2348143 00:27:10.327 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2348143 00:27:10.327 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2348143 00:27:10.896 02:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:10.896 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:10.896 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:10.896 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.896 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:10.896 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.896 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.896 02:02:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.802 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:12.802 00:27:12.802 real 0m7.778s 00:27:12.802 user 0m23.785s 00:27:12.802 sys 0m1.501s 00:27:12.802 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:12.802 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.802 ************************************ 00:27:12.802 END TEST nvmf_shutdown_tc2 00:27:12.802 ************************************ 00:27:12.802 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:12.802 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:12.802 02:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:12.802 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:12.802 ************************************ 00:27:12.802 START TEST nvmf_shutdown_tc3 00:27:12.802 ************************************ 00:27:12.802 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:12.803 02:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 
00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.803 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:13.065 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:13.065 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:13.065 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:13.065 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:13.066 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:13.066 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:13.066 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:13.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:13.066 02:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:13.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:27:13.066 00:27:13.066 --- 10.0.0.2 ping statistics --- 00:27:13.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.066 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:13.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:27:13.066 00:27:13.066 --- 10.0.0.1 ping statistics --- 00:27:13.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.066 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.066 
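The `nvmf_tcp_init` trace above builds a two-interface test topology: the target-side port (`cvl_0_0`) is moved into a network namespace, each side gets an address on 10.0.0.0/24, and a ping in each direction verifies the path before the NVMe-oF target starts. A minimal sketch of that sequence, with a hypothetical `run` wrapper that only echoes the commands unless `DRY_RUN=0` (the real script executes them directly and needs root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing traced in nvmf/common.sh above.
# Set DRY_RUN=0 (and run as root) to actually execute the commands.
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}
NS=cvl_0_0_ns_spdk   # target-side namespace, as in the log
TGT_IF=cvl_0_0       # target interface, moved into the namespace
INI_IF=cvl_0_1       # initiator interface, stays in the root namespace

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Verify both directions before starting the NVMe-oF target:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target process is later launched under `ip netns exec cvl_0_0_ns_spdk`, its 4420/tcp listener is only reachable across this routed path, which is what makes the shutdown tests exercise real TCP traffic on physical NICs.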
02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2349238 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2349238 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2349238 ']' 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:13.066 02:02:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.066 [2024-07-26 02:02:55.026561] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:27:13.066 [2024-07-26 02:02:55.026644] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.066 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.325 [2024-07-26 02:02:55.097899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.325 [2024-07-26 02:02:55.188905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.325 [2024-07-26 02:02:55.188965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.325 [2024-07-26 02:02:55.188981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.325 [2024-07-26 02:02:55.188995] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.325 [2024-07-26 02:02:55.189007] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:13.325 [2024-07-26 02:02:55.189090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.325 [2024-07-26 02:02:55.189215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.325 [2024-07-26 02:02:55.189283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:13.325 [2024-07-26 02:02:55.189286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.325 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:13.325 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:13.325 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:13.325 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:13.325 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.583 [2024-07-26 02:02:55.348539] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.583 02:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.583 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.584 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.584 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.584 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:13.584 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.584 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.584 Malloc1 00:27:13.584 [2024-07-26 02:02:55.437874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.584 Malloc2 00:27:13.584 Malloc3 00:27:13.584 Malloc4 00:27:13.841 Malloc5 00:27:13.841 Malloc6 00:27:13.841 Malloc7 00:27:13.841 Malloc8 00:27:13.841 Malloc9 
00:27:14.100 Malloc10 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2349330 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2349330 /var/tmp/bdevperf.sock 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2349330 ']' 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:27:14.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.100 { 00:27:14.100 "params": { 00:27:14.100 "name": "Nvme$subsystem", 00:27:14.100 "trtype": "$TEST_TRANSPORT", 00:27:14.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.100 "adrfam": "ipv4", 00:27:14.100 "trsvcid": "$NVMF_PORT", 00:27:14.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.100 "hdgst": ${hdgst:-false}, 00:27:14.100 "ddgst": ${ddgst:-false} 00:27:14.100 }, 00:27:14.100 "method": "bdev_nvme_attach_controller" 00:27:14.100 } 00:27:14.100 EOF 00:27:14.100 )") 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.100 { 00:27:14.100 "params": { 00:27:14.100 "name": "Nvme$subsystem", 00:27:14.100 "trtype": "$TEST_TRANSPORT", 00:27:14.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.100 "adrfam": "ipv4", 00:27:14.100 "trsvcid": "$NVMF_PORT", 00:27:14.100 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.100 "hdgst": ${hdgst:-false}, 00:27:14.100 "ddgst": ${ddgst:-false} 00:27:14.100 }, 00:27:14.100 "method": "bdev_nvme_attach_controller" 00:27:14.100 } 00:27:14.100 EOF 00:27:14.100 )") 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.100 { 00:27:14.100 "params": { 00:27:14.100 "name": "Nvme$subsystem", 00:27:14.100 "trtype": "$TEST_TRANSPORT", 00:27:14.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.100 "adrfam": "ipv4", 00:27:14.100 "trsvcid": "$NVMF_PORT", 00:27:14.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.100 "hdgst": ${hdgst:-false}, 00:27:14.100 "ddgst": ${ddgst:-false} 00:27:14.100 }, 00:27:14.100 "method": "bdev_nvme_attach_controller" 00:27:14.100 } 00:27:14.100 EOF 00:27:14.100 )") 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.100 { 00:27:14.100 "params": { 00:27:14.100 "name": "Nvme$subsystem", 00:27:14.100 "trtype": "$TEST_TRANSPORT", 00:27:14.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.100 "adrfam": "ipv4", 00:27:14.100 "trsvcid": "$NVMF_PORT", 00:27:14.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.100 "hdgst": 
${hdgst:-false}, 00:27:14.100 "ddgst": ${ddgst:-false} 00:27:14.100 }, 00:27:14.100 "method": "bdev_nvme_attach_controller" 00:27:14.100 } 00:27:14.100 EOF 00:27:14.100 )") 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.100 { 00:27:14.100 "params": { 00:27:14.100 "name": "Nvme$subsystem", 00:27:14.100 "trtype": "$TEST_TRANSPORT", 00:27:14.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.100 "adrfam": "ipv4", 00:27:14.100 "trsvcid": "$NVMF_PORT", 00:27:14.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.100 "hdgst": ${hdgst:-false}, 00:27:14.100 "ddgst": ${ddgst:-false} 00:27:14.100 }, 00:27:14.100 "method": "bdev_nvme_attach_controller" 00:27:14.100 } 00:27:14.100 EOF 00:27:14.100 )") 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.100 { 00:27:14.100 "params": { 00:27:14.100 "name": "Nvme$subsystem", 00:27:14.100 "trtype": "$TEST_TRANSPORT", 00:27:14.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.100 "adrfam": "ipv4", 00:27:14.100 "trsvcid": "$NVMF_PORT", 00:27:14.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.100 "hdgst": ${hdgst:-false}, 00:27:14.100 "ddgst": ${ddgst:-false} 00:27:14.100 }, 00:27:14.100 "method": "bdev_nvme_attach_controller" 
00:27:14.100 } 00:27:14.100 EOF 00:27:14.100 )") 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:14.100 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.101 { 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme$subsystem", 00:27:14.101 "trtype": "$TEST_TRANSPORT", 00:27:14.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "$NVMF_PORT", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.101 "hdgst": ${hdgst:-false}, 00:27:14.101 "ddgst": ${ddgst:-false} 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 } 00:27:14.101 EOF 00:27:14.101 )") 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.101 { 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme$subsystem", 00:27:14.101 "trtype": "$TEST_TRANSPORT", 00:27:14.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "$NVMF_PORT", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.101 "hdgst": ${hdgst:-false}, 00:27:14.101 "ddgst": ${ddgst:-false} 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 } 00:27:14.101 EOF 00:27:14.101 )") 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@554 -- # cat 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.101 { 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme$subsystem", 00:27:14.101 "trtype": "$TEST_TRANSPORT", 00:27:14.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "$NVMF_PORT", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.101 "hdgst": ${hdgst:-false}, 00:27:14.101 "ddgst": ${ddgst:-false} 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 } 00:27:14.101 EOF 00:27:14.101 )") 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.101 { 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme$subsystem", 00:27:14.101 "trtype": "$TEST_TRANSPORT", 00:27:14.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "$NVMF_PORT", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.101 "hdgst": ${hdgst:-false}, 00:27:14.101 "ddgst": ${ddgst:-false} 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 } 00:27:14.101 EOF 00:27:14.101 )") 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@556 -- # jq . 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:14.101 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme1", 00:27:14.101 "trtype": "tcp", 00:27:14.101 "traddr": "10.0.0.2", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "4420", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:14.101 "hdgst": false, 00:27:14.101 "ddgst": false 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 },{ 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme2", 00:27:14.101 "trtype": "tcp", 00:27:14.101 "traddr": "10.0.0.2", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "4420", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:14.101 "hdgst": false, 00:27:14.101 "ddgst": false 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 },{ 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme3", 00:27:14.101 "trtype": "tcp", 00:27:14.101 "traddr": "10.0.0.2", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "4420", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:14.101 "hdgst": false, 00:27:14.101 "ddgst": false 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 },{ 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme4", 00:27:14.101 "trtype": "tcp", 00:27:14.101 "traddr": "10.0.0.2", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "4420", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:14.101 "hdgst": false, 00:27:14.101 "ddgst": false 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 },{ 
00:27:14.101 "params": { 00:27:14.101 "name": "Nvme5", 00:27:14.101 "trtype": "tcp", 00:27:14.101 "traddr": "10.0.0.2", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "4420", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:14.101 "hdgst": false, 00:27:14.101 "ddgst": false 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 },{ 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme6", 00:27:14.101 "trtype": "tcp", 00:27:14.101 "traddr": "10.0.0.2", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "4420", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:14.101 "hdgst": false, 00:27:14.101 "ddgst": false 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 },{ 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme7", 00:27:14.101 "trtype": "tcp", 00:27:14.101 "traddr": "10.0.0.2", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "4420", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:14.101 "hdgst": false, 00:27:14.101 "ddgst": false 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 },{ 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme8", 00:27:14.101 "trtype": "tcp", 00:27:14.101 "traddr": "10.0.0.2", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "4420", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:14.101 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:14.101 "hdgst": false, 00:27:14.101 "ddgst": false 00:27:14.101 }, 00:27:14.101 "method": "bdev_nvme_attach_controller" 00:27:14.101 },{ 00:27:14.101 "params": { 00:27:14.101 "name": "Nvme9", 00:27:14.101 "trtype": "tcp", 00:27:14.101 "traddr": "10.0.0.2", 00:27:14.101 "adrfam": "ipv4", 00:27:14.101 "trsvcid": "4420", 00:27:14.101 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:14.101 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:14.101 "hdgst": false, 00:27:14.102 "ddgst": false 00:27:14.102 }, 00:27:14.102 "method": "bdev_nvme_attach_controller" 00:27:14.102 },{ 00:27:14.102 "params": { 00:27:14.102 "name": "Nvme10", 00:27:14.102 "trtype": "tcp", 00:27:14.102 "traddr": "10.0.0.2", 00:27:14.102 "adrfam": "ipv4", 00:27:14.102 "trsvcid": "4420", 00:27:14.102 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:14.102 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:14.102 "hdgst": false, 00:27:14.102 "ddgst": false 00:27:14.102 }, 00:27:14.102 "method": "bdev_nvme_attach_controller" 00:27:14.102 }' 00:27:14.102 [2024-07-26 02:02:55.945181] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:27:14.102 [2024-07-26 02:02:55.945258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2349330 ] 00:27:14.102 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.102 [2024-07-26 02:02:56.011811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.102 [2024-07-26 02:02:56.098710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.475 Running I/O for 10 seconds... 
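The per-subsystem heredoc blocks above are `gen_nvmf_target_json` building one `bdev_nvme_attach_controller` entry per NQN, then comma-joining them with `IFS=,` into the JSON that bdevperf reads over `--json /dev/fd/63`. A hedged sketch of that join step, using the addresses and NQNs printed in the log; the real helper additionally pipes the result through `jq .` and wraps it in bdevperf's full config schema, which is omitted here:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern: one JSON object per subsystem,
# comma-joined into an array. Values match the test defaults shown in the
# log (10.0.0.2:4420, cnodeN/hostN) but are used illustratively.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# "${config[*]}" joins elements with the first character of IFS, which is
# exactly the IFS=, trick visible in nvmf/common.sh@557 above.
IFS=,
joined="[${config[*]}]"
unset IFS
printf '%s\n' "$joined"
```

The joined string is what appears pretty-printed in the log as the ten `Nvme1`..`Nvme10` attach-controller entries handed to bdevperf.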
00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:16.041 02:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:16.041 02:02:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:16.305 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:16.305 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:16.305 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:16.305 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2349238 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2349238 ']' 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2349238 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2349238 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2349238' 00:27:16.306 killing process with pid 2349238 00:27:16.306 02:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2349238 00:27:16.306 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2349238 00:27:16.306 [2024-07-26 02:02:58.277571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.277996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278025] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with 
the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 
00:27:16.306 [2024-07-26 02:02:58.278348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 02:02:58.278545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [2024-07-26 
02:02:58.278557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f5910 is same with the state(5) to be set 00:27:16.306 [... previous message repeated ~250 times between 02:02:58.278557 and 02:02:58.288103 for tqpair=0x14f5910, 0x11a1c00, 0x14f5dd0, 0x14f6290, 0x14f6770, and 0x11a0400 ...]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288253] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.309 [2024-07-26 02:02:58.288264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.288276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.288288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.288301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.288313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.288324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0400 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289397] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289548] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289692] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289833] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289931] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289979] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.289991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.290003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.290014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.310 [2024-07-26 02:02:58.290026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.290038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.290050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.290069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.290083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.290095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a08e0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291320] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291360] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291524] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291674] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291698] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291827] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291973] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.291997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.292009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.292021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.292033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.292045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.292063] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.292077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.292090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.292106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.292119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.292131] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0da0 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.293948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.293975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.293991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.294004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.294016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.294028] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.294018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:1[2024-07-26 02:02:58.294042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.311 the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.294071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.294078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.311 [2024-07-26 02:02:58.294085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.294097] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.294109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.311 [2024-07-26 02:02:58.294111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:1[2024-07-26 02:02:58.294177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 the state(5) to be set 
00:27:16.312 [2024-07-26 02:02:58.294192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-26 02:02:58.294192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-26 02:02:58.294264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same 
with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294362] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294456] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 
[2024-07-26 02:02:58.294536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.312 [2024-07-26 02:02:58.294695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.312 [2024-07-26 02:02:58.294700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.312 [2024-07-26 02:02:58.294708] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.313 [2024-07-26 02:02:58.294715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.294721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.313 [2024-07-26 02:02:58.294731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.294733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.313 [2024-07-26 02:02:58.294747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.313 [2024-07-26 02:02:58.294747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.294762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.313 [2024-07-26 02:02:58.294766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.294774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.313 [2024-07-26 02:02:58.294784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.294787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.313 
[2024-07-26 02:02:58.294800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.313 [2024-07-26 02:02:58.294801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.294811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:27:16.313 [2024-07-26 02:02:58.294815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.294832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.294845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.294861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.294875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.294891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.294905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.294920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.294934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.294949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.294963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.294979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.294992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1720 is same with the state(5) to be set 00:27:16.313 [2024-07-26 02:02:58.295541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1720 is same with the state(5) to be set 00:27:16.313 [2024-07-26 02:02:58.295571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1720 is same with the state(5) to be set 00:27:16.313 [2024-07-26 02:02:58.295573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.313 [2024-07-26 02:02:58.295741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.313 [2024-07-26 02:02:58.295756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.314 
[2024-07-26 02:02:58.295770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.295785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.314 [2024-07-26 02:02:58.295799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.295815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.314 [2024-07-26 02:02:58.295829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.295848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.314 [2024-07-26 02:02:58.295862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.295877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.314 [2024-07-26 02:02:58.295891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.295907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.314 [2024-07-26 02:02:58.295921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.295936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.314 [2024-07-26 02:02:58.295950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.295966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.314 [2024-07-26 02:02:58.295980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.295995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.314 [2024-07-26 02:02:58.296008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.296024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.314 [2024-07-26 02:02:58.296037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.296056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.314 [2024-07-26 02:02:58.296077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.296123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.314 [2024-07-26 02:02:58.296636] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1482b90 
was disconnected and freed. reset controller. 00:27:16.314 [2024-07-26 02:02:58.296723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.314 [2024-07-26 02:02:58.296745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.296761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.314 [2024-07-26 02:02:58.296775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.296789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.314 [2024-07-26 02:02:58.296803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.296817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.314 [2024-07-26 02:02:58.296835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.296849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b05f0 is same with the state(5) to be set 00:27:16.314 [2024-07-26 02:02:58.296901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.314 [2024-07-26 02:02:58.296921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.314 [2024-07-26 02:02:58.296936] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.296950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.296964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.296977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.296992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140a440 is same with the state(5) to be set
00:27:16.314 [2024-07-26 02:02:58.297086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1d10 is same with the state(5) to be set
00:27:16.314 [2024-07-26 02:02:58.297253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2580 is same with the state(5) to be set
00:27:16.314 [2024-07-26 02:02:58.297433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd610 is same with the state(5) to be set
00:27:16.314 [2024-07-26 02:02:58.297594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.314 [2024-07-26 02:02:58.297710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0230 is same with the state(5) to be set
00:27:16.314 [2024-07-26 02:02:58.297754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.314 [2024-07-26 02:02:58.297774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.315 [2024-07-26 02:02:58.297789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.315 [2024-07-26 02:02:58.297803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.315 [2024-07-26 02:02:58.297831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.315 [2024-07-26 02:02:58.297845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.315 [2024-07-26 02:02:58.297858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.315 [2024-07-26 02:02:58.297872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.315 [2024-07-26 02:02:58.297885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0e50 is same with the state(5) to be set
00:27:16.315 [2024-07-26 02:02:58.297929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.297949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.297964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.297985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.298000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.298013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.298027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.298040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.298055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309700 is same with the state(5) to be set
00:27:16.316 [2024-07-26 02:02:58.298109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.298129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.298144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.298158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.298171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.298184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.298199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.298212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.298224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5290 is same with the state(5) to be set
00:27:16.316 [2024-07-26 02:02:58.298267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.298287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.298302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.298323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.298337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.298351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.298365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.316 [2024-07-26 02:02:58.298384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.298397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311760 is same with the state(5) to be set
00:27:16.316 [2024-07-26 02:02:58.299464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.299972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.299987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.316 [2024-07-26 02:02:58.300001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.316 [2024-07-26 02:02:58.300016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.300984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.300998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.301018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.301033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.301056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.301078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.301094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.301107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.301123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.301137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.301153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.301167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.301183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.301197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.317 [2024-07-26 02:02:58.301213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.317 [2024-07-26 02:02:58.301226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.318 [2024-07-26 02:02:58.301242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.318 [2024-07-26 02:02:58.301255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.318 [2024-07-26 02:02:58.301271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.318 [2024-07-26 02:02:58.301288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.318 [2024-07-26 02:02:58.301304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.318 [2024-07-26 02:02:58.301318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.318 [2024-07-26 02:02:58.301333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.318 [2024-07-26 02:02:58.301347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.318 [2024-07-26 02:02:58.301363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.318 [2024-07-26 02:02:58.301377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.318 [2024-07-26 02:02:58.301392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.318 [2024-07-26 02:02:58.301406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.318 [2024-07-26 02:02:58.301439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.318 [2024-07-26 02:02:58.301508] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13845e0 was disconnected and freed. reset controller.
00:27:16.318 [2024-07-26 02:02:58.304369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:16.318 [2024-07-26 02:02:58.304418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a2580 (9): Bad file descriptor
00:27:16.318 [2024-07-26 02:02:58.305308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:16.318 [2024-07-26 02:02:58.305344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a1d10 (9): Bad file descriptor
00:27:16.318 [2024-07-26 02:02:58.306397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.318 [2024-07-26 02:02:58.306441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a2580 with addr=10.0.0.2, port=4420
00:27:16.318 [2024-07-26 02:02:58.306460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2580 is same with the state(5) to be set
00:27:16.318 [2024-07-26 02:02:58.306533] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:16.318 [2024-07-26 02:02:58.306616] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:16.318 [2024-07-26 02:02:58.306681] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:16.318 [2024-07-26 02:02:58.306747] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:16.318 [2024-07-26 02:02:58.306836] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:16.318 [2024-07-26 02:02:58.306951] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:16.318 [2024-07-26 02:02:58.307070] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:16.318 [2024-07-26 02:02:58.307154] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:16.318 [2024-07-26 02:02:58.307310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.318 [2024-07-26 02:02:58.307338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a1d10 with addr=10.0.0.2, port=4420
00:27:16.318 [2024-07-26 02:02:58.307363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1d10 is same with the state(5) to be set
00:27:16.318 [2024-07-26 02:02:58.307383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a2580 (9): Bad file descriptor
00:27:16.318 [2024-07-26 02:02:58.307450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b05f0 (9): Bad file descriptor
00:27:16.318 [2024-07-26 02:02:58.307489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140a440 (9): Bad file descriptor
00:27:16.318 [2024-07-26 02:02:58.307526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddd610 (9): Bad file descriptor
00:27:16.318 [2024-07-26 02:02:58.307561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b0230
(9): Bad file descriptor 00:27:16.318 [2024-07-26 02:02:58.307592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b0e50 (9): Bad file descriptor 00:27:16.318 [2024-07-26 02:02:58.307623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1309700 (9): Bad file descriptor 00:27:16.318 [2024-07-26 02:02:58.307653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e5290 (9): Bad file descriptor 00:27:16.318 [2024-07-26 02:02:58.307684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311760 (9): Bad file descriptor 00:27:16.318 [2024-07-26 02:02:58.307892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a1d10 (9): Bad file descriptor 00:27:16.318 [2024-07-26 02:02:58.307923] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:16.318 [2024-07-26 02:02:58.307939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:16.318 [2024-07-26 02:02:58.307955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:16.318 [2024-07-26 02:02:58.308022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.318 [2024-07-26 02:02:58.308053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:16.318 [2024-07-26 02:02:58.308076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:16.318 [2024-07-26 02:02:58.308091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:16.318 [2024-07-26 02:02:58.308151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.587 [2024-07-26 02:02:58.315528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:16.587 [2024-07-26 02:02:58.315812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.587 [2024-07-26 02:02:58.315849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a2580 with addr=10.0.0.2, port=4420 00:27:16.587 [2024-07-26 02:02:58.315868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2580 is same with the state(5) to be set 00:27:16.587 [2024-07-26 02:02:58.315936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a2580 (9): Bad file descriptor 00:27:16.587 [2024-07-26 02:02:58.316001] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:16.587 [2024-07-26 02:02:58.316019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:16.587 [2024-07-26 02:02:58.316037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:16.587 [2024-07-26 02:02:58.316119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.587 [2024-07-26 02:02:58.316607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:16.587 [2024-07-26 02:02:58.316774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.587 [2024-07-26 02:02:58.316802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a1d10 with addr=10.0.0.2, port=4420 00:27:16.587 [2024-07-26 02:02:58.316831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1d10 is same with the state(5) to be set 00:27:16.587 [2024-07-26 02:02:58.316893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a1d10 (9): Bad file descriptor 00:27:16.587 [2024-07-26 02:02:58.316954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:16.587 [2024-07-26 02:02:58.316971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:16.587 [2024-07-26 02:02:58.316985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:16.587 [2024-07-26 02:02:58.317046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.587 [2024-07-26 02:02:58.317446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.587 [2024-07-26 02:02:58.317940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.587 [2024-07-26 02:02:58.317955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.317969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:16.588 [2024-07-26 02:02:58.317984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.317998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 
02:02:58.318681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318850] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.318972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.318986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.319002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.319016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.319032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.319046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.319069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.319085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.319102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.319116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.588 [2024-07-26 02:02:58.319132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.588 [2024-07-26 02:02:58.319146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.589 [2024-07-26 02:02:58.319162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.589 [2024-07-26 02:02:58.319176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.589 [2024-07-26 02:02:58.319192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.589 
[2024-07-26 02:02:58.319206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.589 [2024-07-26 02:02:58.319222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.589 [2024-07-26 02:02:58.319236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.589 [2024-07-26 02:02:58.319252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.589 [2024-07-26 02:02:58.319266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.589 [2024-07-26 02:02:58.319282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.589 [2024-07-26 02:02:58.319296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.589 [2024-07-26 02:02:58.319315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.589 [2024-07-26 02:02:58.319330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.589 [2024-07-26 02:02:58.319346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.589 [2024-07-26 02:02:58.319360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.589 [2024-07-26 02:02:58.319376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.589 [2024-07-26 02:02:58.319390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.589 [2024-07-26 02:02:58.319406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.589 [2024-07-26 02:02:58.319420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.589 [2024-07-26 02:02:58.319435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148a7c0 is same with the state(5) to be set
[repeated log records elided: 2024-07-26 02:02:58.320710–.322673, nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion, 64 READ commands sqid:1 cid:0–63 nsid:1 lba:16384–24448 in steps of 128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:16.590 [2024-07-26 02:02:58.322688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1497960 is same with the state(5) to be set
[repeated log records elided: 2024-07-26 02:02:58.323924–.325333, nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion, READ commands sqid:1 cid:0–45 nsid:1 lba:16384–22144 in steps of 128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:16.592 [2024-07-26 02:02:58.325350] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 
[2024-07-26 02:02:58.325696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.325874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.325889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1498dd0 is same with the state(5) to be set 00:27:16.592 [2024-07-26 02:02:58.327141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.327164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.327185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.327201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.327217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.327230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.327246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.327260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.327278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.592 [2024-07-26 02:02:58.327296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.592 [2024-07-26 02:02:58.327313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:16.593 [2024-07-26 02:02:58.327461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327623] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.327982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.327997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 
02:02:58.328146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328310] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.593 [2024-07-26 02:02:58.328475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.593 [2024-07-26 02:02:58.328491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 
[2024-07-26 02:02:58.328653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.328981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.328997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.329011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.329027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.329041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.329057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.329077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.329092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149a310 is same with the state(5) to be set 00:27:16.594 [2024-07-26 02:02:58.330333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:16.594 [2024-07-26 02:02:58.330410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-26 02:02:58.330908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-26 02:02:58.330922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:16.594 [2024-07-26 02:02:58.330938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.330952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.330968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.330982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.330998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331112] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 
02:02:58.331632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331806] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.331966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.331980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.332010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.332025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.332041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.332055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.332080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.332094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.332110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-26 02:02:58.332124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-26 02:02:58.332141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.332155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.332171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 
[2024-07-26 02:02:58.332184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.332200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.332224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.332240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.332254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.332271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.332294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.332310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.332324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.332340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.332354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.332368] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e0fe0 is same with the state(5) to be set 00:27:16.596 [2024-07-26 02:02:58.333597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.333646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.333677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.333707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.333737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.333767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.333797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.333826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.333856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.333885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.333915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:16.596 [2024-07-26 02:02:58.333944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.333974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.333989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334121] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-26 02:02:58.334373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-26 02:02:58.334387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.597 [2024-07-26 02:02:58.334407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-26 02:02:58.334421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.597 [2024-07-26 02:02:58.334437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-26 02:02:58.334451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:16.597 [2024-07-26 02:02:58.334467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-26 02:02:58.334481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.597 [2024-07-26 02:02:58.334497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-26 02:02:58.334510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.597 [2024-07-26 02:02:58.334526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-26 02:02:58.334540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.597 [2024-07-26 02:02:58.334555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-26 02:02:58.334569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.597 [2024-07-26 02:02:58.334585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-26 02:02:58.334599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.597 [2024-07-26 02:02:58.334615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-26 
02:02:58.334629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.334977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.334991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.597 [2024-07-26 02:02:58.335549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.597 [2024-07-26 02:02:58.335564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149b1f0 is same with the state(5) to be set
00:27:16.598 [2024-07-26 02:02:58.336801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.336824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.336845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.336860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.336877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.336891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.336907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.336921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.336938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.336952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.336968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.336982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.336997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.598 [2024-07-26 02:02:58.337916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.598 [2024-07-26 02:02:58.337935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.337950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.337965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.337979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.337995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.338731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.338745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1381e30 is same with the state(5) to be set
00:27:16.599 [2024-07-26 02:02:58.339979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.340002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.340024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.340039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.340055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.340078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.340095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.340109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.340125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.340139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.340155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.340169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.340185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.340199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.340215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.340229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.340245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.340259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.340274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.340288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.340309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.599 [2024-07-26 02:02:58.340324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.599 [2024-07-26 02:02:58.340340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.600 [2024-07-26 02:02:58.340355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.600 [2024-07-26 02:02:58.340371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.600 [2024-07-26 02:02:58.340385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.600 [2024-07-26 02:02:58.340401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.600 [2024-07-26 02:02:58.340415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.600 [2024-07-26 02:02:58.340430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.600 [2024-07-26 02:02:58.340444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.600 [2024-07-26 02:02:58.340460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.600 [2024-07-26 02:02:58.340474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.600 [2024-07-26 02:02:58.340490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.600 [2024-07-26 02:02:58.340503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:16.600 [2024-07-26 02:02:58.340519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.600 [2024-07-26 02:02:58.340533]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.340982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.340997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 
02:02:58.341041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341217] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.600 [2024-07-26 02:02:58.341462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.600 [2024-07-26 02:02:58.341476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 
[2024-07-26 02:02:58.341564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.601 [2024-07-26 02:02:58.341925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.601 [2024-07-26 02:02:58.341939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1383330 is same with the state(5) to be set 00:27:16.601 [2024-07-26 02:02:58.343950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.601 [2024-07-26 02:02:58.343983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:16.601 [2024-07-26 02:02:58.344002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:16.601 [2024-07-26 02:02:58.344019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:16.601 [2024-07-26 02:02:58.344152] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:16.601 [2024-07-26 02:02:58.344179] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:16.601 [2024-07-26 02:02:58.344199] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:16.601 [2024-07-26 02:02:58.344218] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:16.601 [2024-07-26 02:02:58.344337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:16.601 [2024-07-26 02:02:58.344362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:16.601 [2024-07-26 02:02:58.344379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:16.601 task offset: 19840 on job bdev=Nvme10n1 fails 00:27:16.601 00:27:16.601 Latency(us) 00:27:16.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.601 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:16.601 Job: Nvme1n1 ended in about 0.83 seconds with error 00:27:16.601 Verification LBA range: start 0x0 length 0x400 00:27:16.601 Nvme1n1 : 0.83 153.30 9.58 76.65 0.00 274960.43 21068.61 262532.36 00:27:16.601 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:16.601 Job: Nvme2n1 ended in about 0.84 seconds with error 00:27:16.601 Verification LBA range: start 0x0 length 0x400 00:27:16.601 Nvme2n1 : 0.84 152.71 9.54 76.35 0.00 269913.51 20000.62 240784.12 00:27:16.601 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:16.601 Job: Nvme3n1 ended in about 0.84 seconds with error 00:27:16.601 Verification LBA range: start 0x0 length 0x400 00:27:16.601 Nvme3n1 : 0.84 152.13 9.51 76.06 0.00 264924.48 24758.04 243891.01 00:27:16.601 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:16.601 Job: Nvme4n1 ended in about 0.84 seconds with error 00:27:16.601 Verification LBA range: start 0x0 length 0x400 00:27:16.601 Nvme4n1 : 0.84 151.55 9.47 75.78 0.00 259875.52 19612.25 265639.25 00:27:16.601 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:16.601 Job: Nvme5n1 ended in about 0.85 seconds with error 00:27:16.601 Verification LBA range: start 0x0 length 0x400 
00:27:16.601 Nvme5n1 : 0.85 150.97 9.44 75.48 0.00 254912.79 19126.80 259425.47 00:27:16.601 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:16.601 Job: Nvme6n1 ended in about 0.85 seconds with error 00:27:16.601 Verification LBA range: start 0x0 length 0x400 00:27:16.601 Nvme6n1 : 0.85 150.40 9.40 75.20 0.00 249881.28 19515.16 265639.25 00:27:16.601 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:16.601 Job: Nvme7n1 ended in about 0.85 seconds with error 00:27:16.601 Verification LBA range: start 0x0 length 0x400 00:27:16.601 Nvme7n1 : 0.85 149.84 9.37 74.92 0.00 244864.70 18447.17 264085.81 00:27:16.601 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:16.601 Job: Nvme8n1 ended in about 0.86 seconds with error 00:27:16.601 Verification LBA range: start 0x0 length 0x400 00:27:16.601 Nvme8n1 : 0.86 149.29 9.33 74.64 0.00 239944.19 36505.98 237677.23 00:27:16.601 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:16.601 Job: Nvme9n1 ended in about 0.82 seconds with error 00:27:16.601 Verification LBA range: start 0x0 length 0x400 00:27:16.601 Nvme9n1 : 0.82 156.34 9.77 78.17 0.00 221249.11 9757.58 295154.73 00:27:16.601 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:16.601 Job: Nvme10n1 ended in about 0.82 seconds with error 00:27:16.601 Verification LBA range: start 0x0 length 0x400 00:27:16.601 Nvme10n1 : 0.82 156.66 9.79 78.33 0.00 214996.13 7815.77 271853.04 00:27:16.601 =================================================================================================================== 00:27:16.602 Total : 1523.18 95.20 761.59 0.00 249552.21 7815.77 295154.73 00:27:16.602 [2024-07-26 02:02:58.371766] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:16.602 [2024-07-26 02:02:58.371848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 
00:27:16.602 [2024-07-26 02:02:58.372192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.602 [2024-07-26 02:02:58.372230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e5290 with addr=10.0.0.2, port=4420 00:27:16.602 [2024-07-26 02:02:58.372251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5290 is same with the state(5) to be set 00:27:16.602 [2024-07-26 02:02:58.372382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.602 [2024-07-26 02:02:58.372409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b0e50 with addr=10.0.0.2, port=4420 00:27:16.602 [2024-07-26 02:02:58.372425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0e50 is same with the state(5) to be set 00:27:16.602 [2024-07-26 02:02:58.372559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.602 [2024-07-26 02:02:58.372585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1311760 with addr=10.0.0.2, port=4420 00:27:16.602 [2024-07-26 02:02:58.372601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311760 is same with the state(5) to be set 00:27:16.602 [2024-07-26 02:02:58.372708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.602 [2024-07-26 02:02:58.372734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1309700 with addr=10.0.0.2, port=4420 00:27:16.602 [2024-07-26 02:02:58.372750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309700 is same with the state(5) to be set 00:27:16.602 [2024-07-26 02:02:58.374998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.602 [2024-07-26 02:02:58.375030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x14b05f0 with addr=10.0.0.2, port=4420 00:27:16.602 [2024-07-26 02:02:58.375047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b05f0 is same with the state(5) to be set 00:27:16.602 [2024-07-26 02:02:58.375162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.602 [2024-07-26 02:02:58.375187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b0230 with addr=10.0.0.2, port=4420 00:27:16.602 [2024-07-26 02:02:58.375203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0230 is same with the state(5) to be set 00:27:16.602 [2024-07-26 02:02:58.375304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.602 [2024-07-26 02:02:58.375328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140a440 with addr=10.0.0.2, port=4420 00:27:16.602 [2024-07-26 02:02:58.375343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140a440 is same with the state(5) to be set 00:27:16.602 [2024-07-26 02:02:58.375448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.602 [2024-07-26 02:02:58.375473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xddd610 with addr=10.0.0.2, port=4420 00:27:16.602 [2024-07-26 02:02:58.375489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddd610 is same with the state(5) to be set 00:27:16.602 [2024-07-26 02:02:58.375515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e5290 (9): Bad file descriptor 00:27:16.602 [2024-07-26 02:02:58.375539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b0e50 (9): Bad file descriptor 00:27:16.602 [2024-07-26 02:02:58.375557] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311760 (9): Bad file descriptor 00:27:16.602 [2024-07-26 02:02:58.375575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1309700 (9): Bad file descriptor 00:27:16.602 [2024-07-26 02:02:58.375623] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:16.602 [2024-07-26 02:02:58.375647] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:16.602 [2024-07-26 02:02:58.375675] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:16.602 [2024-07-26 02:02:58.375695] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:16.602 [2024-07-26 02:02:58.375715] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:16.602 [2024-07-26 02:02:58.375733] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:16.602 [2024-07-26 02:02:58.375820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:16.602 [2024-07-26 02:02:58.375845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:16.602 [2024-07-26 02:02:58.375898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b05f0 (9): Bad file descriptor 00:27:16.602 [2024-07-26 02:02:58.375924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b0230 (9): Bad file descriptor 00:27:16.602 [2024-07-26 02:02:58.375942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140a440 (9): Bad file descriptor 00:27:16.602 [2024-07-26 02:02:58.375960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddd610 (9): Bad file descriptor 00:27:16.602 [2024-07-26 02:02:58.375977] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.602 [2024-07-26 02:02:58.375995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.602 [2024-07-26 02:02:58.376013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.602 [2024-07-26 02:02:58.376032] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:16.602 [2024-07-26 02:02:58.376046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:16.602 [2024-07-26 02:02:58.376068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:27:16.602 [2024-07-26 02:02:58.376087] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:16.602 [2024-07-26 02:02:58.376101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:16.602 [2024-07-26 02:02:58.376114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:16.602 [2024-07-26 02:02:58.376129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:16.602 [2024-07-26 02:02:58.376143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:16.602 [2024-07-26 02:02:58.376155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:16.602 [2024-07-26 02:02:58.376260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.602 [2024-07-26 02:02:58.376280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.602 [2024-07-26 02:02:58.376292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.602 [2024-07-26 02:02:58.376303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.602 [2024-07-26 02:02:58.376407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.602 [2024-07-26 02:02:58.376432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a2580 with addr=10.0.0.2, port=4420 00:27:16.602 [2024-07-26 02:02:58.376448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a2580 is same with the state(5) to be set 00:27:16.602 [2024-07-26 02:02:58.376553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.602 [2024-07-26 02:02:58.376576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a1d10 with addr=10.0.0.2, port=4420 00:27:16.602 [2024-07-26 02:02:58.376591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1d10 is same with the state(5) to be set 00:27:16.602 [2024-07-26 02:02:58.376605] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:16.602 [2024-07-26 02:02:58.376618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:16.602 [2024-07-26 02:02:58.376631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:16.602 [2024-07-26 02:02:58.376649] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:16.602 [2024-07-26 02:02:58.376662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:16.602 [2024-07-26 02:02:58.376675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:27:16.602 [2024-07-26 02:02:58.376690] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:16.602 [2024-07-26 02:02:58.376704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:16.602 [2024-07-26 02:02:58.376716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:16.602 [2024-07-26 02:02:58.376732] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:16.602 [2024-07-26 02:02:58.376750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:16.602 [2024-07-26 02:02:58.376763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:16.602 [2024-07-26 02:02:58.376800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.602 [2024-07-26 02:02:58.376818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.602 [2024-07-26 02:02:58.376830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.602 [2024-07-26 02:02:58.376841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.602 [2024-07-26 02:02:58.376857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a2580 (9): Bad file descriptor 00:27:16.602 [2024-07-26 02:02:58.376876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a1d10 (9): Bad file descriptor 00:27:16.602 [2024-07-26 02:02:58.376914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:16.602 [2024-07-26 02:02:58.376931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:16.602 [2024-07-26 02:02:58.376945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:16.602 [2024-07-26 02:02:58.376961] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:16.602 [2024-07-26 02:02:58.376974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:16.602 [2024-07-26 02:02:58.376987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:16.602 [2024-07-26 02:02:58.377027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.603 [2024-07-26 02:02:58.377044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.863 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:16.863 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2349330 00:27:18.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2349330) - No such process 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:18.234 rmmod nvme_tcp 00:27:18.234 rmmod nvme_fabrics 00:27:18.234 rmmod nvme_keyring 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.234 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.141 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:20.141 00:27:20.141 real 0m7.137s 00:27:20.141 
user 0m16.540s 00:27:20.141 sys 0m1.443s 00:27:20.141 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.141 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:20.141 ************************************ 00:27:20.141 END TEST nvmf_shutdown_tc3 00:27:20.141 ************************************ 00:27:20.141 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:20.141 00:27:20.141 real 0m26.834s 00:27:20.141 user 1m14.391s 00:27:20.141 sys 0m6.368s 00:27:20.141 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.141 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:20.141 ************************************ 00:27:20.141 END TEST nvmf_shutdown 00:27:20.141 ************************************ 00:27:20.141 02:03:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:27:20.141 00:27:20.141 real 16m43.719s 00:27:20.141 user 47m8.466s 00:27:20.141 sys 3m48.948s 00:27:20.141 02:03:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.141 02:03:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:20.141 ************************************ 00:27:20.141 END TEST nvmf_target_extra 00:27:20.141 ************************************ 00:27:20.141 02:03:01 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:20.141 02:03:01 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:20.141 02:03:01 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.141 02:03:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:20.141 
************************************ 00:27:20.141 START TEST nvmf_host 00:27:20.141 ************************************ 00:27:20.141 02:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:20.141 * Looking for test storage... 00:27:20.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:20.141 02:03:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.141 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.142 ************************************ 00:27:20.142 START TEST nvmf_multicontroller 00:27:20.142 ************************************ 00:27:20.142 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:20.142 * Looking for test storage... 
00:27:20.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.401 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:20.402 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:20.402 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.402 02:03:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:22.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:22.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:22.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.302 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:22.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:22.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:27:22.303 00:27:22.303 --- 10.0.0.2 ping statistics --- 00:27:22.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.303 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:22.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:27:22.303 00:27:22.303 --- 10.0.0.1 ping statistics --- 00:27:22.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.303 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:22.303 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2351956 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2351956 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2351956 ']' 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:22.562 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.562 [2024-07-26 02:03:04.385640] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:27:22.562 [2024-07-26 02:03:04.385738] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.562 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.562 [2024-07-26 02:03:04.452234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:22.562 [2024-07-26 02:03:04.533491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.562 [2024-07-26 02:03:04.533547] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:22.562 [2024-07-26 02:03:04.533574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.562 [2024-07-26 02:03:04.533586] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.562 [2024-07-26 02:03:04.533596] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.562 [2024-07-26 02:03:04.533681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.562 [2024-07-26 02:03:04.533746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.562 [2024-07-26 02:03:04.533749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.820 [2024-07-26 02:03:04.673962] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.820 Malloc0 00:27:22.820 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 [2024-07-26 
02:03:04.738934] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 [2024-07-26 02:03:04.746820] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 Malloc1 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2351984 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2351984 /var/tmp/bdevperf.sock 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@831 -- # '[' -z 2351984 ']' 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:22.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:22.821 02:03:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.386 NVMe0n1 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.386 1 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.386 request: 00:27:23.386 { 00:27:23.386 "name": "NVMe0", 00:27:23.386 "trtype": "tcp", 00:27:23.386 "traddr": "10.0.0.2", 00:27:23.386 "adrfam": "ipv4", 00:27:23.386 "trsvcid": "4420", 00:27:23.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:23.386 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:23.386 "hostaddr": "10.0.0.2", 00:27:23.386 "hostsvcid": "60000", 00:27:23.386 "prchk_reftag": false, 00:27:23.386 "prchk_guard": false, 00:27:23.386 "hdgst": false, 00:27:23.386 "ddgst": false, 00:27:23.386 "method": "bdev_nvme_attach_controller", 00:27:23.386 "req_id": 1 00:27:23.386 } 00:27:23.386 Got JSON-RPC error response 00:27:23.386 response: 00:27:23.386 { 00:27:23.386 "code": -114, 00:27:23.386 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:23.386 } 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:23.386 02:03:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.386 request: 00:27:23.386 { 00:27:23.386 "name": "NVMe0", 00:27:23.386 "trtype": "tcp", 00:27:23.386 "traddr": "10.0.0.2", 00:27:23.386 "adrfam": "ipv4", 00:27:23.386 "trsvcid": "4420", 00:27:23.386 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:23.386 "hostaddr": "10.0.0.2", 00:27:23.386 "hostsvcid": "60000", 00:27:23.386 "prchk_reftag": false, 00:27:23.386 "prchk_guard": false, 00:27:23.386 "hdgst": false, 00:27:23.386 "ddgst": false, 00:27:23.386 "method": "bdev_nvme_attach_controller", 00:27:23.386 "req_id": 1 00:27:23.386 } 00:27:23.386 Got JSON-RPC error response 00:27:23.386 response: 00:27:23.386 { 00:27:23.386 "code": -114, 00:27:23.386 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:23.386 } 00:27:23.386 
02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:23.386 02:03:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.386 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.386 request: 00:27:23.386 { 00:27:23.386 "name": "NVMe0", 00:27:23.386 "trtype": "tcp", 00:27:23.386 "traddr": "10.0.0.2", 00:27:23.386 "adrfam": "ipv4", 00:27:23.386 "trsvcid": "4420", 00:27:23.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:23.386 "hostaddr": "10.0.0.2", 00:27:23.386 "hostsvcid": "60000", 00:27:23.386 "prchk_reftag": false, 00:27:23.386 "prchk_guard": false, 00:27:23.386 "hdgst": false, 00:27:23.386 "ddgst": false, 00:27:23.386 "multipath": "disable", 00:27:23.386 "method": "bdev_nvme_attach_controller", 00:27:23.387 "req_id": 1 00:27:23.387 } 00:27:23.387 Got JSON-RPC error response 00:27:23.387 response: 00:27:23.387 { 00:27:23.387 "code": -114, 00:27:23.387 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:23.387 } 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.387 request: 00:27:23.387 { 00:27:23.387 "name": "NVMe0", 00:27:23.387 "trtype": "tcp", 00:27:23.387 "traddr": "10.0.0.2", 00:27:23.387 "adrfam": "ipv4", 00:27:23.387 "trsvcid": "4420", 00:27:23.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:23.387 "hostaddr": "10.0.0.2", 00:27:23.387 "hostsvcid": "60000", 00:27:23.387 "prchk_reftag": false, 00:27:23.387 "prchk_guard": false, 00:27:23.387 "hdgst": false, 00:27:23.387 "ddgst": false, 00:27:23.387 "multipath": "failover", 00:27:23.387 "method": "bdev_nvme_attach_controller", 00:27:23.387 "req_id": 1 00:27:23.387 } 00:27:23.387 Got JSON-RPC error response 00:27:23.387 response: 00:27:23.387 { 00:27:23.387 "code": -114, 00:27:23.387 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:23.387 
} 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.387 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:23.387 02:03:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.387 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.644 00:27:23.644 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.644 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:23.644 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:23.644 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.644 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.644 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.644 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:23.644 02:03:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:25.021 0 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2351984 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' 
-z 2351984 ']' 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2351984 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2351984 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2351984' 00:27:25.021 killing process with pid 2351984 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2351984 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2351984 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:25.021 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:25.021 [2024-07-26 02:03:04.852433] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:27:25.021 [2024-07-26 02:03:04.852519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2351984 ] 00:27:25.021 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.021 [2024-07-26 02:03:04.914882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.021 [2024-07-26 02:03:05.001755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.021 [2024-07-26 02:03:05.530599] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name b5f84daf-668e-41ae-874e-404c022c3b44 already exists 00:27:25.021 [2024-07-26 02:03:05.530639] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:b5f84daf-668e-41ae-874e-404c022c3b44 alias for bdev NVMe1n1 00:27:25.021 [2024-07-26 02:03:05.530670] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:25.021 Running I/O for 1 seconds... 
00:27:25.021 00:27:25.021 Latency(us) 00:27:25.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.021 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:25.021 NVMe0n1 : 1.01 19404.42 75.80 0.00 0.00 6586.10 4660.34 15437.37 00:27:25.021 =================================================================================================================== 00:27:25.021 Total : 19404.42 75.80 0.00 0.00 6586.10 4660.34 15437.37 00:27:25.021 Received shutdown signal, test time was about 1.000000 seconds 00:27:25.021 00:27:25.021 Latency(us) 00:27:25.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.021 =================================================================================================================== 00:27:25.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:25.021 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:25.021 02:03:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:25.021 
rmmod nvme_tcp 00:27:25.021 rmmod nvme_fabrics 00:27:25.021 rmmod nvme_keyring 00:27:25.021 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:25.021 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:25.021 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:25.021 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2351956 ']' 00:27:25.021 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2351956 00:27:25.021 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2351956 ']' 00:27:25.021 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2351956 00:27:25.021 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:25.021 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:25.021 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2351956 00:27:25.280 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:25.280 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:25.280 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2351956' 00:27:25.280 killing process with pid 2351956 00:27:25.280 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2351956 00:27:25.280 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2351956 00:27:25.538 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:25.538 02:03:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.538 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.538 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.538 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.538 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.538 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.538 02:03:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.451 00:27:27.451 real 0m7.250s 00:27:27.451 user 0m11.224s 00:27:27.451 sys 0m2.245s 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.451 ************************************ 00:27:27.451 END TEST nvmf_multicontroller 00:27:27.451 ************************************ 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.451 ************************************ 00:27:27.451 START TEST nvmf_aer 00:27:27.451 ************************************ 00:27:27.451 02:03:09 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:27.451 * Looking for test storage... 00:27:27.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.451 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.709 02:03:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:29.614 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:29.614 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:29.614 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.614 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:29.614 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:29.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:27:29.615 00:27:29.615 --- 10.0.0.2 ping statistics --- 00:27:29.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.615 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:29.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:27:29.615 00:27:29.615 --- 10.0.0.1 ping statistics --- 00:27:29.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.615 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2354692 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2354692 00:27:29.615 02:03:11 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2354692 ']' 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:29.615 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.615 [2024-07-26 02:03:11.604707] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:27:29.615 [2024-07-26 02:03:11.604794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.874 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.874 [2024-07-26 02:03:11.669146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.874 [2024-07-26 02:03:11.757499] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.874 [2024-07-26 02:03:11.757549] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.874 [2024-07-26 02:03:11.757569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.874 [2024-07-26 02:03:11.757580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:29.874 [2024-07-26 02:03:11.757589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.874 [2024-07-26 02:03:11.758420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.874 [2024-07-26 02:03:11.758500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.874 [2024-07-26 02:03:11.758528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.874 [2024-07-26 02:03:11.758531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.874 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:29.874 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:27:29.874 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:29.874 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:29.874 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.134 [2024-07-26 02:03:11.908460] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.134 02:03:11 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.134 Malloc0 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.134 [2024-07-26 02:03:11.961103] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:30.134 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.135 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.135 [ 
00:27:30.135 { 00:27:30.135 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:30.135 "subtype": "Discovery", 00:27:30.135 "listen_addresses": [], 00:27:30.135 "allow_any_host": true, 00:27:30.135 "hosts": [] 00:27:30.135 }, 00:27:30.135 { 00:27:30.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.135 "subtype": "NVMe", 00:27:30.135 "listen_addresses": [ 00:27:30.135 { 00:27:30.135 "trtype": "TCP", 00:27:30.135 "adrfam": "IPv4", 00:27:30.135 "traddr": "10.0.0.2", 00:27:30.135 "trsvcid": "4420" 00:27:30.135 } 00:27:30.135 ], 00:27:30.135 "allow_any_host": true, 00:27:30.135 "hosts": [], 00:27:30.135 "serial_number": "SPDK00000000000001", 00:27:30.135 "model_number": "SPDK bdev Controller", 00:27:30.135 "max_namespaces": 2, 00:27:30.135 "min_cntlid": 1, 00:27:30.135 "max_cntlid": 65519, 00:27:30.135 "namespaces": [ 00:27:30.135 { 00:27:30.135 "nsid": 1, 00:27:30.135 "bdev_name": "Malloc0", 00:27:30.135 "name": "Malloc0", 00:27:30.135 "nguid": "0FA6A56058E84B5FB4E609ED9DC0AB7C", 00:27:30.135 "uuid": "0fa6a560-58e8-4b5f-b4e6-09ed9dc0ab7c" 00:27:30.135 } 00:27:30.135 ] 00:27:30.135 } 00:27:30.135 ] 00:27:30.135 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.135 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:30.135 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:30.135 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2354842 00:27:30.135 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:30.135 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:30.135 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:30.135 02:03:11 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:30.135 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:30.135 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:30.135 02:03:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:30.135 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.135 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:30.135 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:30.135 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:30.135 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.394 Malloc1 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.394 [ 00:27:30.394 { 00:27:30.394 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:30.394 "subtype": "Discovery", 00:27:30.394 "listen_addresses": [], 00:27:30.394 "allow_any_host": true, 00:27:30.394 "hosts": [] 00:27:30.394 }, 00:27:30.394 { 00:27:30.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.394 "subtype": "NVMe", 00:27:30.394 "listen_addresses": [ 00:27:30.394 { 00:27:30.394 "trtype": "TCP", 00:27:30.394 "adrfam": "IPv4", 00:27:30.394 "traddr": "10.0.0.2", 00:27:30.394 "trsvcid": "4420" 00:27:30.394 } 00:27:30.394 ], 00:27:30.394 "allow_any_host": true, 00:27:30.394 "hosts": [], 00:27:30.394 "serial_number": "SPDK00000000000001", 00:27:30.394 "model_number": 
"SPDK bdev Controller", 00:27:30.394 "max_namespaces": 2, 00:27:30.394 "min_cntlid": 1, 00:27:30.394 "max_cntlid": 65519, 00:27:30.394 "namespaces": [ 00:27:30.394 { 00:27:30.394 "nsid": 1, 00:27:30.394 "bdev_name": "Malloc0", 00:27:30.394 "name": "Malloc0", 00:27:30.394 "nguid": "0FA6A56058E84B5FB4E609ED9DC0AB7C", 00:27:30.394 "uuid": "0fa6a560-58e8-4b5f-b4e6-09ed9dc0ab7c" 00:27:30.394 }, 00:27:30.394 { 00:27:30.394 "nsid": 2, 00:27:30.394 "bdev_name": "Malloc1", 00:27:30.394 "name": "Malloc1", 00:27:30.394 "nguid": "55BE4586B035471690369829EE2B5F7E", 00:27:30.394 "uuid": "55be4586-b035-4716-9036-9829ee2b5f7e" 00:27:30.394 } 00:27:30.394 ] 00:27:30.394 } 00:27:30.394 ] 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.394 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2354842 00:27:30.394 Asynchronous Event Request test 00:27:30.394 Attaching to 10.0.0.2 00:27:30.394 Attached to 10.0.0.2 00:27:30.395 Registering asynchronous event callbacks... 00:27:30.395 Starting namespace attribute notice tests for all controllers... 00:27:30.395 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:30.395 aer_cb - Changed Namespace 00:27:30.395 Cleaning up... 
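The trace above is dense, but stripped of timestamps the target-side setup it exercises reduces to a handful of JSON-RPC calls (the log issues them through `rpc_cmd`). A minimal sketch of that sequence, assuming `scripts/rpc.py` from an SPDK checkout is reachable (the `RPC` path here is an assumption; adjust it to your tree):

```shell
#!/bin/sh
# Sketch of the RPC sequence visible in the trace above (host/aer.sh steps).
# Assumption: RPC points at SPDK's rpc.py; override via the environment.
RPC="${RPC:-scripts/rpc.py}"

# Echo each call so the sequence stays visible even if rpc.py is absent here.
run() { echo "+ $RPC $*"; "$RPC" "$@" 2>/dev/null || true; }

run nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8192-byte in-capsule data
run bdev_malloc_create 64 512 --name Malloc0                 # 64 MiB ramdisk, 512-byte blocks
run nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 2                            # allow any host, at most 2 namespaces
run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 # attach the ramdisk as nsid 1
run nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                               # listen inside the target netns
run nvmf_get_subsystems                                      # dump the config, as the log does
```

The AER itself is then provoked by adding a second namespace while the `aer` host tool is attached: `bdev_malloc_create 64 4096 --name Malloc1` followed by `nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2`, which produces the "aer_cb - Changed Namespace" notice seen above.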
00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:30.395 rmmod nvme_tcp 
00:27:30.395 rmmod nvme_fabrics 00:27:30.395 rmmod nvme_keyring 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2354692 ']' 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2354692 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2354692 ']' 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2354692 00:27:30.395 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:27:30.654 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:30.654 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2354692 00:27:30.654 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:30.654 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:30.654 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2354692' 00:27:30.654 killing process with pid 2354692 00:27:30.654 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2354692 00:27:30.654 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2354692 00:27:30.914 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:30.914 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:30.914 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:30.914 02:03:12 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.914 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:30.914 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.914 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.914 02:03:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.851 02:03:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:32.851 00:27:32.851 real 0m5.312s 00:27:32.851 user 0m4.206s 00:27:32.851 sys 0m1.859s 00:27:32.851 02:03:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:32.851 02:03:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:32.851 ************************************ 00:27:32.851 END TEST nvmf_aer 00:27:32.851 ************************************ 00:27:32.851 02:03:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:32.851 02:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:32.851 02:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.852 ************************************ 00:27:32.852 START TEST nvmf_async_init 00:27:32.852 ************************************ 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:32.852 * Looking for test storage... 
00:27:32.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.852 02:03:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:32.852 02:03:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8c3bcbf3f29d4694a293201c64aa8dd8 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:32.852 02:03:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.388 
02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:35.388 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.388 02:03:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:35.388 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:35.388 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:35.388 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:35.388 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:35.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:27:35.389 00:27:35.389 --- 10.0.0.2 ping statistics --- 00:27:35.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.389 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:35.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:27:35.389 00:27:35.389 --- 10.0.0.1 ping statistics --- 00:27:35.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.389 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.389 02:03:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:35.389 02:03:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2356781 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2356781 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2356781 ']' 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.389 [2024-07-26 02:03:17.051778] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:27:35.389 [2024-07-26 02:03:17.051853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.389 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.389 [2024-07-26 02:03:17.115787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.389 [2024-07-26 02:03:17.199227] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.389 [2024-07-26 02:03:17.199279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.389 [2024-07-26 02:03:17.199302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.389 [2024-07-26 02:03:17.199313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.389 [2024-07-26 02:03:17.199324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:35.389 [2024-07-26 02:03:17.199375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.389 [2024-07-26 02:03:17.329930] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.389 null0 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8c3bcbf3f29d4694a293201c64aa8dd8 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.389 [2024-07-26 02:03:17.370199] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.389 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.647 nvme0n1 00:27:35.647 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.647 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:35.647 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.647 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.647 [ 00:27:35.647 { 00:27:35.647 "name": "nvme0n1", 00:27:35.647 "aliases": [ 00:27:35.647 "8c3bcbf3-f29d-4694-a293-201c64aa8dd8" 00:27:35.647 ], 00:27:35.647 "product_name": "NVMe disk", 00:27:35.647 "block_size": 512, 00:27:35.647 "num_blocks": 2097152, 00:27:35.647 "uuid": "8c3bcbf3-f29d-4694-a293-201c64aa8dd8", 00:27:35.647 "assigned_rate_limits": { 00:27:35.647 "rw_ios_per_sec": 0, 00:27:35.647 "rw_mbytes_per_sec": 0, 00:27:35.647 "r_mbytes_per_sec": 0, 00:27:35.647 "w_mbytes_per_sec": 0 00:27:35.647 }, 00:27:35.647 "claimed": false, 00:27:35.647 "zoned": false, 00:27:35.647 "supported_io_types": { 00:27:35.647 "read": true, 00:27:35.647 "write": true, 00:27:35.647 "unmap": false, 00:27:35.647 "flush": true, 00:27:35.647 "reset": true, 00:27:35.647 "nvme_admin": true, 00:27:35.647 "nvme_io": true, 00:27:35.647 "nvme_io_md": false, 00:27:35.647 "write_zeroes": true, 00:27:35.647 "zcopy": false, 00:27:35.647 "get_zone_info": false, 00:27:35.647 "zone_management": false, 00:27:35.647 "zone_append": false, 00:27:35.647 "compare": true, 00:27:35.647 "compare_and_write": true, 00:27:35.647 "abort": true, 00:27:35.647 "seek_hole": false, 00:27:35.647 "seek_data": false, 00:27:35.647 "copy": true, 00:27:35.647 "nvme_iov_md": false 
00:27:35.647 }, 00:27:35.647 "memory_domains": [ 00:27:35.647 { 00:27:35.647 "dma_device_id": "system", 00:27:35.647 "dma_device_type": 1 00:27:35.647 } 00:27:35.647 ], 00:27:35.647 "driver_specific": { 00:27:35.647 "nvme": [ 00:27:35.647 { 00:27:35.647 "trid": { 00:27:35.647 "trtype": "TCP", 00:27:35.647 "adrfam": "IPv4", 00:27:35.647 "traddr": "10.0.0.2", 00:27:35.647 "trsvcid": "4420", 00:27:35.647 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:35.647 }, 00:27:35.647 "ctrlr_data": { 00:27:35.647 "cntlid": 1, 00:27:35.647 "vendor_id": "0x8086", 00:27:35.647 "model_number": "SPDK bdev Controller", 00:27:35.647 "serial_number": "00000000000000000000", 00:27:35.647 "firmware_revision": "24.09", 00:27:35.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:35.647 "oacs": { 00:27:35.647 "security": 0, 00:27:35.647 "format": 0, 00:27:35.647 "firmware": 0, 00:27:35.647 "ns_manage": 0 00:27:35.647 }, 00:27:35.647 "multi_ctrlr": true, 00:27:35.647 "ana_reporting": false 00:27:35.647 }, 00:27:35.647 "vs": { 00:27:35.647 "nvme_version": "1.3" 00:27:35.647 }, 00:27:35.647 "ns_data": { 00:27:35.647 "id": 1, 00:27:35.647 "can_share": true 00:27:35.647 } 00:27:35.647 } 00:27:35.647 ], 00:27:35.647 "mp_policy": "active_passive" 00:27:35.647 } 00:27:35.647 } 00:27:35.647 ] 00:27:35.647 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.647 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:35.647 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.647 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.647 [2024-07-26 02:03:17.623580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.647 [2024-07-26 02:03:17.623685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd66680 
(9): Bad file descriptor 00:27:35.906 [2024-07-26 02:03:17.756228] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.906 [ 00:27:35.906 { 00:27:35.906 "name": "nvme0n1", 00:27:35.906 "aliases": [ 00:27:35.906 "8c3bcbf3-f29d-4694-a293-201c64aa8dd8" 00:27:35.906 ], 00:27:35.906 "product_name": "NVMe disk", 00:27:35.906 "block_size": 512, 00:27:35.906 "num_blocks": 2097152, 00:27:35.906 "uuid": "8c3bcbf3-f29d-4694-a293-201c64aa8dd8", 00:27:35.906 "assigned_rate_limits": { 00:27:35.906 "rw_ios_per_sec": 0, 00:27:35.906 "rw_mbytes_per_sec": 0, 00:27:35.906 "r_mbytes_per_sec": 0, 00:27:35.906 "w_mbytes_per_sec": 0 00:27:35.906 }, 00:27:35.906 "claimed": false, 00:27:35.906 "zoned": false, 00:27:35.906 "supported_io_types": { 00:27:35.906 "read": true, 00:27:35.906 "write": true, 00:27:35.906 "unmap": false, 00:27:35.906 "flush": true, 00:27:35.906 "reset": true, 00:27:35.906 "nvme_admin": true, 00:27:35.906 "nvme_io": true, 00:27:35.906 "nvme_io_md": false, 00:27:35.906 "write_zeroes": true, 00:27:35.906 "zcopy": false, 00:27:35.906 "get_zone_info": false, 00:27:35.906 "zone_management": false, 00:27:35.906 "zone_append": false, 00:27:35.906 "compare": true, 00:27:35.906 "compare_and_write": true, 00:27:35.906 "abort": true, 00:27:35.906 "seek_hole": false, 00:27:35.906 "seek_data": false, 00:27:35.906 "copy": true, 00:27:35.906 "nvme_iov_md": false 00:27:35.906 }, 00:27:35.906 "memory_domains": [ 00:27:35.906 { 00:27:35.906 "dma_device_id": "system", 00:27:35.906 "dma_device_type": 1 
00:27:35.906 } 00:27:35.906 ], 00:27:35.906 "driver_specific": { 00:27:35.906 "nvme": [ 00:27:35.906 { 00:27:35.906 "trid": { 00:27:35.906 "trtype": "TCP", 00:27:35.906 "adrfam": "IPv4", 00:27:35.906 "traddr": "10.0.0.2", 00:27:35.906 "trsvcid": "4420", 00:27:35.906 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:35.906 }, 00:27:35.906 "ctrlr_data": { 00:27:35.906 "cntlid": 2, 00:27:35.906 "vendor_id": "0x8086", 00:27:35.906 "model_number": "SPDK bdev Controller", 00:27:35.906 "serial_number": "00000000000000000000", 00:27:35.906 "firmware_revision": "24.09", 00:27:35.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:35.906 "oacs": { 00:27:35.906 "security": 0, 00:27:35.906 "format": 0, 00:27:35.906 "firmware": 0, 00:27:35.906 "ns_manage": 0 00:27:35.906 }, 00:27:35.906 "multi_ctrlr": true, 00:27:35.906 "ana_reporting": false 00:27:35.906 }, 00:27:35.906 "vs": { 00:27:35.906 "nvme_version": "1.3" 00:27:35.906 }, 00:27:35.906 "ns_data": { 00:27:35.906 "id": 1, 00:27:35.906 "can_share": true 00:27:35.906 } 00:27:35.906 } 00:27:35.906 ], 00:27:35.906 "mp_policy": "active_passive" 00:27:35.906 } 00:27:35.906 } 00:27:35.906 ] 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.G6zB1KsOn5 00:27:35.906 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.G6zB1KsOn5 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.907 [2024-07-26 02:03:17.808249] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:35.907 [2024-07-26 02:03:17.808388] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G6zB1KsOn5 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.907 [2024-07-26 02:03:17.816257] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G6zB1KsOn5 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.907 [2024-07-26 02:03:17.824284] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:35.907 [2024-07-26 02:03:17.824355] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:35.907 nvme0n1 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.907 [ 00:27:35.907 { 00:27:35.907 "name": "nvme0n1", 00:27:35.907 "aliases": [ 00:27:35.907 "8c3bcbf3-f29d-4694-a293-201c64aa8dd8" 00:27:35.907 ], 00:27:35.907 "product_name": "NVMe disk", 00:27:35.907 "block_size": 512, 00:27:35.907 "num_blocks": 2097152, 00:27:35.907 "uuid": "8c3bcbf3-f29d-4694-a293-201c64aa8dd8", 00:27:35.907 "assigned_rate_limits": { 00:27:35.907 "rw_ios_per_sec": 0, 00:27:35.907 "rw_mbytes_per_sec": 0, 00:27:35.907 "r_mbytes_per_sec": 0, 00:27:35.907 "w_mbytes_per_sec": 0 00:27:35.907 }, 00:27:35.907 "claimed": false, 00:27:35.907 "zoned": false, 00:27:35.907 "supported_io_types": { 
00:27:35.907 "read": true, 00:27:35.907 "write": true, 00:27:35.907 "unmap": false, 00:27:35.907 "flush": true, 00:27:35.907 "reset": true, 00:27:35.907 "nvme_admin": true, 00:27:35.907 "nvme_io": true, 00:27:35.907 "nvme_io_md": false, 00:27:35.907 "write_zeroes": true, 00:27:35.907 "zcopy": false, 00:27:35.907 "get_zone_info": false, 00:27:35.907 "zone_management": false, 00:27:35.907 "zone_append": false, 00:27:35.907 "compare": true, 00:27:35.907 "compare_and_write": true, 00:27:35.907 "abort": true, 00:27:35.907 "seek_hole": false, 00:27:35.907 "seek_data": false, 00:27:35.907 "copy": true, 00:27:35.907 "nvme_iov_md": false 00:27:35.907 }, 00:27:35.907 "memory_domains": [ 00:27:35.907 { 00:27:35.907 "dma_device_id": "system", 00:27:35.907 "dma_device_type": 1 00:27:35.907 } 00:27:35.907 ], 00:27:35.907 "driver_specific": { 00:27:35.907 "nvme": [ 00:27:35.907 { 00:27:35.907 "trid": { 00:27:35.907 "trtype": "TCP", 00:27:35.907 "adrfam": "IPv4", 00:27:35.907 "traddr": "10.0.0.2", 00:27:35.907 "trsvcid": "4421", 00:27:35.907 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:35.907 }, 00:27:35.907 "ctrlr_data": { 00:27:35.907 "cntlid": 3, 00:27:35.907 "vendor_id": "0x8086", 00:27:35.907 "model_number": "SPDK bdev Controller", 00:27:35.907 "serial_number": "00000000000000000000", 00:27:35.907 "firmware_revision": "24.09", 00:27:35.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:35.907 "oacs": { 00:27:35.907 "security": 0, 00:27:35.907 "format": 0, 00:27:35.907 "firmware": 0, 00:27:35.907 "ns_manage": 0 00:27:35.907 }, 00:27:35.907 "multi_ctrlr": true, 00:27:35.907 "ana_reporting": false 00:27:35.907 }, 00:27:35.907 "vs": { 00:27:35.907 "nvme_version": "1.3" 00:27:35.907 }, 00:27:35.907 "ns_data": { 00:27:35.907 "id": 1, 00:27:35.907 "can_share": true 00:27:35.907 } 00:27:35.907 } 00:27:35.907 ], 00:27:35.907 "mp_policy": "active_passive" 00:27:35.907 } 00:27:35.907 } 00:27:35.907 ] 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.907 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.G6zB1KsOn5 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.168 rmmod nvme_tcp 00:27:36.168 rmmod nvme_fabrics 00:27:36.168 rmmod nvme_keyring 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2356781 ']' 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
2356781 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2356781 ']' 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2356781 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:36.168 02:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2356781 00:27:36.168 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:36.168 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:36.168 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2356781' 00:27:36.168 killing process with pid 2356781 00:27:36.168 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2356781 00:27:36.168 [2024-07-26 02:03:18.008236] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:36.168 [2024-07-26 02:03:18.008266] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:36.168 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2356781 00:27:36.429 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:36.429 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:36.429 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:36.429 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.429 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.429 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.429 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.429 02:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.333 02:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:38.333 00:27:38.333 real 0m5.501s 00:27:38.333 user 0m2.016s 00:27:38.333 sys 0m1.849s 00:27:38.333 02:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:38.333 02:03:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.333 ************************************ 00:27:38.333 END TEST nvmf_async_init 00:27:38.333 ************************************ 00:27:38.333 02:03:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:38.333 02:03:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:38.333 02:03:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:38.333 02:03:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.333 ************************************ 00:27:38.333 START TEST dma 00:27:38.333 ************************************ 00:27:38.333 02:03:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:38.593 * Looking for test storage... 
00:27:38.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.593 02:03:20 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:38.593 00:27:38.593 real 0m0.075s 00:27:38.593 user 0m0.034s 00:27:38.593 sys 0m0.046s 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:38.593 ************************************ 00:27:38.593 END TEST dma 00:27:38.593 ************************************ 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.593 ************************************ 00:27:38.593 START TEST nvmf_identify 00:27:38.593 ************************************ 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:38.593 * Looking for test storage... 
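host/dma.sh exits 0 immediately here because the DMA test only applies to RDMA transports and this run uses `--transport=tcp`, so the harness still records `END TEST dma` as a pass. That guard can be sketched as a small predicate (names are illustrative, not the script's own):

```shell
# Hedged sketch of the transport guard in host/dma.sh: skip cleanly
# (exit status 0) when a test does not apply to the transport under test.
transport_supported() {
    local required=$1 actual=$2
    [ "$actual" = "$required" ]
}

# Typical use at the top of a transport-specific test script:
#   transport_supported rdma "$TEST_TRANSPORT" || exit 0
```

Exiting 0 rather than non-zero is deliberate: the run_test wrapper treats any non-zero status as a failure, so "not applicable" must look like success.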
00:27:38.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.593 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.594 02:03:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.499 02:03:22 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:40.499 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:40.499 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:40.500 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:40.500 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:40.500 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:40.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:40.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:27:40.500 00:27:40.500 --- 10.0.0.2 ping statistics --- 00:27:40.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.500 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:27:40.500 00:27:40.500 --- 10.0.0.1 ping statistics --- 00:27:40.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.500 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:40.500 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
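The two pings above confirm the topology `nvmf_tcp_init` just built: one port of the dual-port e810 (`cvl_0_0`) is moved into the `cvl_0_0_ns_spdk` namespace as the target side at 10.0.0.2, while the other port (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1. For readers without such a NIC, a rough equivalent can be built from a veth pair; the sketch below only defines a function (invoking it requires root), and every interface and namespace name in it is hypothetical:

```shell
# Hedged sketch of the namespace topology verified by the pings above,
# rebuilt with a veth pair instead of the e810's two ports. Requires root
# to invoke; names here are made up for illustration.
setup_tgt_ns_sketch() {
    local ns=${1:-spdk_tgt_ns}
    ip netns add "$ns"
    ip link add veth_host type veth peer name veth_tgt
    ip link set veth_tgt netns "$ns"                           # target side into the ns
    ip addr add 10.0.0.1/24 dev veth_host                      # initiator address
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev veth_tgt   # target address
    ip link set veth_host up
    ip netns exec "$ns" ip link set veth_tgt up
    ip netns exec "$ns" ip link set lo up
    ping -c 1 -W 2 10.0.0.2                                    # initiator -> target
}
```

Either way, the target process is then launched inside the namespace (`ip netns exec … nvmf_tgt`), which is exactly what the next lines of the log do.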
00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2358900 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2358900 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2358900 ']' 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:40.760 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:40.760 [2024-07-26 02:03:22.577627] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:27:40.760 [2024-07-26 02:03:22.577700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.760 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.760 [2024-07-26 02:03:22.641592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:40.760 [2024-07-26 02:03:22.731689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.760 [2024-07-26 02:03:22.731743] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.760 [2024-07-26 02:03:22.731765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.760 [2024-07-26 02:03:22.731776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.760 [2024-07-26 02:03:22.731785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
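`waitforlisten 2358900` above blocks until the freshly launched `nvmf_tgt` answers on `/var/tmp/spdk.sock`. Stripped of the RPC probe, the core is a bounded poll loop; this generic sketch waits for a path to appear (the function name, retry count, and interval are our guesses, not SPDK's):

```shell
# Hedged, generic version of the waitforlisten polling loop: block until a
# path (e.g. a daemon's UNIX socket) shows up, or fail after N tries.
wait_for_path() {
    local path=$1 retries=${2:-100}      # 100 x 0.1 s = 10 s budget (our choice)
    while [ "$retries" -gt 0 ]; do
        [ -e "$path" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}
```

SPDK's real helper also keeps rechecking the PID with `kill -0` on each iteration, so a target that crashes during startup fails the wait quickly instead of burning the whole timeout.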
00:27:40.760 [2024-07-26 02:03:22.731874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.760 [2024-07-26 02:03:22.731940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.760 [2024-07-26 02:03:22.732008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.760 [2024-07-26 02:03:22.732010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.021 [2024-07-26 02:03:22.861464] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.021 Malloc0 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.021 02:03:22 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.021 [2024-07-26 02:03:22.943278] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.021 02:03:22 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.021 [ 00:27:41.021 { 00:27:41.021 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:41.021 "subtype": "Discovery", 00:27:41.021 "listen_addresses": [ 00:27:41.021 { 00:27:41.021 "trtype": "TCP", 00:27:41.021 "adrfam": "IPv4", 00:27:41.021 "traddr": "10.0.0.2", 00:27:41.021 "trsvcid": "4420" 00:27:41.021 } 00:27:41.021 ], 00:27:41.021 "allow_any_host": true, 00:27:41.021 "hosts": [] 00:27:41.021 }, 00:27:41.021 { 00:27:41.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:41.021 "subtype": "NVMe", 00:27:41.021 "listen_addresses": [ 00:27:41.021 { 00:27:41.021 "trtype": "TCP", 00:27:41.021 "adrfam": "IPv4", 00:27:41.021 "traddr": "10.0.0.2", 00:27:41.021 "trsvcid": "4420" 00:27:41.021 } 00:27:41.021 ], 00:27:41.021 "allow_any_host": true, 00:27:41.021 "hosts": [], 00:27:41.021 "serial_number": "SPDK00000000000001", 00:27:41.021 "model_number": "SPDK bdev Controller", 00:27:41.021 "max_namespaces": 32, 00:27:41.021 "min_cntlid": 1, 00:27:41.021 "max_cntlid": 65519, 00:27:41.021 "namespaces": [ 00:27:41.021 { 00:27:41.021 "nsid": 1, 00:27:41.021 "bdev_name": "Malloc0", 00:27:41.021 "name": "Malloc0", 00:27:41.021 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:41.021 "eui64": "ABCDEF0123456789", 00:27:41.021 "uuid": "837ba739-874a-4fa7-ad3b-32519513be70" 00:27:41.021 } 00:27:41.021 ] 00:27:41.021 } 00:27:41.021 ] 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.021 02:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:41.021 [2024-07-26 02:03:22.986676] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:27:41.021 [2024-07-26 02:03:22.986725] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358928 ] 00:27:41.021 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.021 [2024-07-26 02:03:23.023737] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:41.021 [2024-07-26 02:03:23.023803] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:41.021 [2024-07-26 02:03:23.023814] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:41.021 [2024-07-26 02:03:23.023828] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:41.021 [2024-07-26 02:03:23.023843] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:41.021 [2024-07-26 02:03:23.024173] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:41.021 [2024-07-26 02:03:23.024230] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16b5fe0 0 00:27:41.284 [2024-07-26 02:03:23.035089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:41.284 [2024-07-26 02:03:23.035117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:41.284 [2024-07-26 02:03:23.035129] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
00:27:41.284 [2024-07-26 02:03:23.035136] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:41.284 [2024-07-26 02:03:23.035195] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.035209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.035218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b5fe0) 00:27:41.284 [2024-07-26 02:03:23.035240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:41.284 [2024-07-26 02:03:23.035268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c880, cid 0, qid 0 00:27:41.284 [2024-07-26 02:03:23.043094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.284 [2024-07-26 02:03:23.043115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.284 [2024-07-26 02:03:23.043123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171c880) on tqpair=0x16b5fe0 00:27:41.284 [2024-07-26 02:03:23.043157] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:41.284 [2024-07-26 02:03:23.043169] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:41.284 [2024-07-26 02:03:23.043180] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:41.284 [2024-07-26 02:03:23.043204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x16b5fe0) 00:27:41.284 [2024-07-26 02:03:23.043232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.284 [2024-07-26 02:03:23.043256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c880, cid 0, qid 0 00:27:41.284 [2024-07-26 02:03:23.043413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.284 [2024-07-26 02:03:23.043429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.284 [2024-07-26 02:03:23.043435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171c880) on tqpair=0x16b5fe0 00:27:41.284 [2024-07-26 02:03:23.043457] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:41.284 [2024-07-26 02:03:23.043471] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:41.284 [2024-07-26 02:03:23.043484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b5fe0) 00:27:41.284 [2024-07-26 02:03:23.043509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.284 [2024-07-26 02:03:23.043530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c880, cid 0, qid 0 00:27:41.284 [2024-07-26 02:03:23.043635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.284 [2024-07-26 02:03:23.043646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:27:41.284 [2024-07-26 02:03:23.043653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171c880) on tqpair=0x16b5fe0 00:27:41.284 [2024-07-26 02:03:23.043669] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:41.284 [2024-07-26 02:03:23.043684] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:41.284 [2024-07-26 02:03:23.043696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b5fe0) 00:27:41.284 [2024-07-26 02:03:23.043721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.284 [2024-07-26 02:03:23.043742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c880, cid 0, qid 0 00:27:41.284 [2024-07-26 02:03:23.043839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.284 [2024-07-26 02:03:23.043853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.284 [2024-07-26 02:03:23.043860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171c880) on tqpair=0x16b5fe0 00:27:41.284 [2024-07-26 02:03:23.043881] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:41.284 [2024-07-26 02:03:23.043899] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.043915] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b5fe0) 00:27:41.284 [2024-07-26 02:03:23.043926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.284 [2024-07-26 02:03:23.043947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c880, cid 0, qid 0 00:27:41.284 [2024-07-26 02:03:23.044051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.284 [2024-07-26 02:03:23.044074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.284 [2024-07-26 02:03:23.044082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.044089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171c880) on tqpair=0x16b5fe0 00:27:41.284 [2024-07-26 02:03:23.044099] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:41.284 [2024-07-26 02:03:23.044108] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:41.284 [2024-07-26 02:03:23.044122] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:41.284 [2024-07-26 02:03:23.044233] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:41.284 [2024-07-26 02:03:23.044242] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:27:41.284 [2024-07-26 02:03:23.044258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.044266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.044272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b5fe0) 00:27:41.284 [2024-07-26 02:03:23.044283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.284 [2024-07-26 02:03:23.044305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c880, cid 0, qid 0 00:27:41.284 [2024-07-26 02:03:23.044444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.284 [2024-07-26 02:03:23.044459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.284 [2024-07-26 02:03:23.044466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.044473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171c880) on tqpair=0x16b5fe0 00:27:41.284 [2024-07-26 02:03:23.044482] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:41.284 [2024-07-26 02:03:23.044498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.044507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.044513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b5fe0) 00:27:41.284 [2024-07-26 02:03:23.044524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.284 [2024-07-26 02:03:23.044545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c880, cid 0, qid 0 00:27:41.284 [2024-07-26 
02:03:23.044643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.284 [2024-07-26 02:03:23.044658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.284 [2024-07-26 02:03:23.044668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.044676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171c880) on tqpair=0x16b5fe0 00:27:41.284 [2024-07-26 02:03:23.044684] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:41.284 [2024-07-26 02:03:23.044692] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:41.284 [2024-07-26 02:03:23.044707] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:41.284 [2024-07-26 02:03:23.044728] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:41.284 [2024-07-26 02:03:23.044748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.284 [2024-07-26 02:03:23.044756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b5fe0) 00:27:41.284 [2024-07-26 02:03:23.044767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.285 [2024-07-26 02:03:23.044789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c880, cid 0, qid 0 00:27:41.285 [2024-07-26 02:03:23.044915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.285 [2024-07-26 02:03:23.044930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:27:41.285 [2024-07-26 02:03:23.044937] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.044945] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b5fe0): datao=0, datal=4096, cccid=0 00:27:41.285 [2024-07-26 02:03:23.044954] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171c880) on tqpair(0x16b5fe0): expected_datao=0, payload_size=4096 00:27:41.285 [2024-07-26 02:03:23.044962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.044993] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045004] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.285 [2024-07-26 02:03:23.045086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.285 [2024-07-26 02:03:23.045093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171c880) on tqpair=0x16b5fe0 00:27:41.285 [2024-07-26 02:03:23.045112] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:41.285 [2024-07-26 02:03:23.045121] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:41.285 [2024-07-26 02:03:23.045129] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:41.285 [2024-07-26 02:03:23.045139] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:41.285 [2024-07-26 02:03:23.045148] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:27:41.285 [2024-07-26 02:03:23.045156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:41.285 [2024-07-26 02:03:23.045172] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:41.285 [2024-07-26 02:03:23.045190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b5fe0) 00:27:41.285 [2024-07-26 02:03:23.045220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:41.285 [2024-07-26 02:03:23.045243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c880, cid 0, qid 0 00:27:41.285 [2024-07-26 02:03:23.045362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.285 [2024-07-26 02:03:23.045377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.285 [2024-07-26 02:03:23.045384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171c880) on tqpair=0x16b5fe0 00:27:41.285 [2024-07-26 02:03:23.045405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045419] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b5fe0) 00:27:41.285 [2024-07-26 02:03:23.045430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.285 [2024-07-26 02:03:23.045440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16b5fe0) 00:27:41.285 [2024-07-26 02:03:23.045463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.285 [2024-07-26 02:03:23.045473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16b5fe0) 00:27:41.285 [2024-07-26 02:03:23.045495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.285 [2024-07-26 02:03:23.045505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045512] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b5fe0) 00:27:41.285 [2024-07-26 02:03:23.045542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.285 [2024-07-26 02:03:23.045551] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:41.285 [2024-07-26 02:03:23.045571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:27:41.285 [2024-07-26 02:03:23.045584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b5fe0) 00:27:41.285 [2024-07-26 02:03:23.045602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.285 [2024-07-26 02:03:23.045624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171c880, cid 0, qid 0 00:27:41.285 [2024-07-26 02:03:23.045651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ca00, cid 1, qid 0 00:27:41.285 [2024-07-26 02:03:23.045659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171cb80, cid 2, qid 0 00:27:41.285 [2024-07-26 02:03:23.045667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171cd00, cid 3, qid 0 00:27:41.285 [2024-07-26 02:03:23.045674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ce80, cid 4, qid 0 00:27:41.285 [2024-07-26 02:03:23.045840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.285 [2024-07-26 02:03:23.045855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.285 [2024-07-26 02:03:23.045866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171ce80) on tqpair=0x16b5fe0 00:27:41.285 [2024-07-26 02:03:23.045883] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:41.285 [2024-07-26 02:03:23.045893] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:41.285 [2024-07-26 02:03:23.045911] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.045921] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b5fe0) 00:27:41.285 [2024-07-26 02:03:23.045932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.285 [2024-07-26 02:03:23.045968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ce80, cid 4, qid 0 00:27:41.285 [2024-07-26 02:03:23.046173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.285 [2024-07-26 02:03:23.046187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.285 [2024-07-26 02:03:23.046194] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.046201] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b5fe0): datao=0, datal=4096, cccid=4 00:27:41.285 [2024-07-26 02:03:23.046209] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171ce80) on tqpair(0x16b5fe0): expected_datao=0, payload_size=4096 00:27:41.285 [2024-07-26 02:03:23.046216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.046232] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.046241] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.091076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.285 [2024-07-26 02:03:23.091095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.285 [2024-07-26 02:03:23.091103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.091110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171ce80) on tqpair=0x16b5fe0 00:27:41.285 [2024-07-26 02:03:23.091131] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:41.285 [2024-07-26 02:03:23.091174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.091186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b5fe0) 00:27:41.285 [2024-07-26 02:03:23.091197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.285 [2024-07-26 02:03:23.091210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.091217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.091224] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b5fe0) 00:27:41.285 [2024-07-26 02:03:23.091233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.285 [2024-07-26 02:03:23.091265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ce80, cid 4, qid 0 00:27:41.285 [2024-07-26 02:03:23.091277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171d000, cid 5, qid 0 00:27:41.285 [2024-07-26 02:03:23.091448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.285 [2024-07-26 02:03:23.091463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.285 [2024-07-26 02:03:23.091470] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.091476] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b5fe0): datao=0, datal=1024, cccid=4 00:27:41.285 [2024-07-26 02:03:23.091489] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171ce80) on tqpair(0x16b5fe0): expected_datao=0, 
payload_size=1024 00:27:41.285 [2024-07-26 02:03:23.091497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.091507] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.091514] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.091523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.285 [2024-07-26 02:03:23.091532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.285 [2024-07-26 02:03:23.091539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.285 [2024-07-26 02:03:23.091545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171d000) on tqpair=0x16b5fe0 00:27:41.286 [2024-07-26 02:03:23.132210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.286 [2024-07-26 02:03:23.132229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.286 [2024-07-26 02:03:23.132238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.132246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171ce80) on tqpair=0x16b5fe0 00:27:41.286 [2024-07-26 02:03:23.132264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.132274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b5fe0) 00:27:41.286 [2024-07-26 02:03:23.132286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.286 [2024-07-26 02:03:23.132316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ce80, cid 4, qid 0 00:27:41.286 [2024-07-26 02:03:23.132467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.286 [2024-07-26 02:03:23.132483] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.286 [2024-07-26 02:03:23.132490] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.132496] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b5fe0): datao=0, datal=3072, cccid=4 00:27:41.286 [2024-07-26 02:03:23.132504] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171ce80) on tqpair(0x16b5fe0): expected_datao=0, payload_size=3072 00:27:41.286 [2024-07-26 02:03:23.132512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.132522] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.132530] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.132549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.286 [2024-07-26 02:03:23.132560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.286 [2024-07-26 02:03:23.132567] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.132574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171ce80) on tqpair=0x16b5fe0 00:27:41.286 [2024-07-26 02:03:23.132589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.132598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b5fe0) 00:27:41.286 [2024-07-26 02:03:23.132609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.286 [2024-07-26 02:03:23.132637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171ce80, cid 4, qid 0 00:27:41.286 [2024-07-26 02:03:23.132754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.286 [2024-07-26 
02:03:23.132769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.286 [2024-07-26 02:03:23.132776] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.132782] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b5fe0): datao=0, datal=8, cccid=4 00:27:41.286 [2024-07-26 02:03:23.132790] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x171ce80) on tqpair(0x16b5fe0): expected_datao=0, payload_size=8 00:27:41.286 [2024-07-26 02:03:23.132802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.132813] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.132820] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.173178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.286 [2024-07-26 02:03:23.173198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.286 [2024-07-26 02:03:23.173205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.286 [2024-07-26 02:03:23.173213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171ce80) on tqpair=0x16b5fe0 00:27:41.286 ===================================================== 00:27:41.286 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:41.286 ===================================================== 00:27:41.286 Controller Capabilities/Features 00:27:41.286 ================================ 00:27:41.286 Vendor ID: 0000 00:27:41.286 Subsystem Vendor ID: 0000 00:27:41.286 Serial Number: .................... 00:27:41.286 Model Number: ........................................ 
00:27:41.286 Firmware Version: 24.09 00:27:41.286 Recommended Arb Burst: 0 00:27:41.286 IEEE OUI Identifier: 00 00 00 00:27:41.286 Multi-path I/O 00:27:41.286 May have multiple subsystem ports: No 00:27:41.286 May have multiple controllers: No 00:27:41.286 Associated with SR-IOV VF: No 00:27:41.286 Max Data Transfer Size: 131072 00:27:41.286 Max Number of Namespaces: 0 00:27:41.286 Max Number of I/O Queues: 1024 00:27:41.286 NVMe Specification Version (VS): 1.3 00:27:41.286 NVMe Specification Version (Identify): 1.3 00:27:41.286 Maximum Queue Entries: 128 00:27:41.286 Contiguous Queues Required: Yes 00:27:41.286 Arbitration Mechanisms Supported 00:27:41.286 Weighted Round Robin: Not Supported 00:27:41.286 Vendor Specific: Not Supported 00:27:41.286 Reset Timeout: 15000 ms 00:27:41.286 Doorbell Stride: 4 bytes 00:27:41.286 NVM Subsystem Reset: Not Supported 00:27:41.286 Command Sets Supported 00:27:41.286 NVM Command Set: Supported 00:27:41.286 Boot Partition: Not Supported 00:27:41.286 Memory Page Size Minimum: 4096 bytes 00:27:41.286 Memory Page Size Maximum: 4096 bytes 00:27:41.286 Persistent Memory Region: Not Supported 00:27:41.286 Optional Asynchronous Events Supported 00:27:41.286 Namespace Attribute Notices: Not Supported 00:27:41.286 Firmware Activation Notices: Not Supported 00:27:41.286 ANA Change Notices: Not Supported 00:27:41.286 PLE Aggregate Log Change Notices: Not Supported 00:27:41.286 LBA Status Info Alert Notices: Not Supported 00:27:41.286 EGE Aggregate Log Change Notices: Not Supported 00:27:41.286 Normal NVM Subsystem Shutdown event: Not Supported 00:27:41.286 Zone Descriptor Change Notices: Not Supported 00:27:41.286 Discovery Log Change Notices: Supported 00:27:41.286 Controller Attributes 00:27:41.286 128-bit Host Identifier: Not Supported 00:27:41.286 Non-Operational Permissive Mode: Not Supported 00:27:41.286 NVM Sets: Not Supported 00:27:41.286 Read Recovery Levels: Not Supported 00:27:41.286 Endurance Groups: Not Supported 00:27:41.286 
Predictable Latency Mode: Not Supported 00:27:41.286 Traffic Based Keep ALive: Not Supported 00:27:41.286 Namespace Granularity: Not Supported 00:27:41.286 SQ Associations: Not Supported 00:27:41.286 UUID List: Not Supported 00:27:41.286 Multi-Domain Subsystem: Not Supported 00:27:41.286 Fixed Capacity Management: Not Supported 00:27:41.286 Variable Capacity Management: Not Supported 00:27:41.286 Delete Endurance Group: Not Supported 00:27:41.286 Delete NVM Set: Not Supported 00:27:41.286 Extended LBA Formats Supported: Not Supported 00:27:41.286 Flexible Data Placement Supported: Not Supported 00:27:41.286 00:27:41.286 Controller Memory Buffer Support 00:27:41.286 ================================ 00:27:41.286 Supported: No 00:27:41.286 00:27:41.286 Persistent Memory Region Support 00:27:41.286 ================================ 00:27:41.286 Supported: No 00:27:41.286 00:27:41.286 Admin Command Set Attributes 00:27:41.286 ============================ 00:27:41.286 Security Send/Receive: Not Supported 00:27:41.286 Format NVM: Not Supported 00:27:41.286 Firmware Activate/Download: Not Supported 00:27:41.286 Namespace Management: Not Supported 00:27:41.286 Device Self-Test: Not Supported 00:27:41.286 Directives: Not Supported 00:27:41.286 NVMe-MI: Not Supported 00:27:41.286 Virtualization Management: Not Supported 00:27:41.286 Doorbell Buffer Config: Not Supported 00:27:41.286 Get LBA Status Capability: Not Supported 00:27:41.286 Command & Feature Lockdown Capability: Not Supported 00:27:41.286 Abort Command Limit: 1 00:27:41.286 Async Event Request Limit: 4 00:27:41.286 Number of Firmware Slots: N/A 00:27:41.286 Firmware Slot 1 Read-Only: N/A 00:27:41.286 Firmware Activation Without Reset: N/A 00:27:41.286 Multiple Update Detection Support: N/A 00:27:41.286 Firmware Update Granularity: No Information Provided 00:27:41.286 Per-Namespace SMART Log: No 00:27:41.286 Asymmetric Namespace Access Log Page: Not Supported 00:27:41.286 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:27:41.286 Command Effects Log Page: Not Supported 00:27:41.286 Get Log Page Extended Data: Supported 00:27:41.286 Telemetry Log Pages: Not Supported 00:27:41.286 Persistent Event Log Pages: Not Supported 00:27:41.286 Supported Log Pages Log Page: May Support 00:27:41.286 Commands Supported & Effects Log Page: Not Supported 00:27:41.286 Feature Identifiers & Effects Log Page:May Support 00:27:41.286 NVMe-MI Commands & Effects Log Page: May Support 00:27:41.286 Data Area 4 for Telemetry Log: Not Supported 00:27:41.286 Error Log Page Entries Supported: 128 00:27:41.286 Keep Alive: Not Supported 00:27:41.286 00:27:41.286 NVM Command Set Attributes 00:27:41.286 ========================== 00:27:41.286 Submission Queue Entry Size 00:27:41.286 Max: 1 00:27:41.286 Min: 1 00:27:41.286 Completion Queue Entry Size 00:27:41.286 Max: 1 00:27:41.286 Min: 1 00:27:41.286 Number of Namespaces: 0 00:27:41.286 Compare Command: Not Supported 00:27:41.286 Write Uncorrectable Command: Not Supported 00:27:41.286 Dataset Management Command: Not Supported 00:27:41.286 Write Zeroes Command: Not Supported 00:27:41.286 Set Features Save Field: Not Supported 00:27:41.286 Reservations: Not Supported 00:27:41.287 Timestamp: Not Supported 00:27:41.287 Copy: Not Supported 00:27:41.287 Volatile Write Cache: Not Present 00:27:41.287 Atomic Write Unit (Normal): 1 00:27:41.287 Atomic Write Unit (PFail): 1 00:27:41.287 Atomic Compare & Write Unit: 1 00:27:41.287 Fused Compare & Write: Supported 00:27:41.287 Scatter-Gather List 00:27:41.287 SGL Command Set: Supported 00:27:41.287 SGL Keyed: Supported 00:27:41.287 SGL Bit Bucket Descriptor: Not Supported 00:27:41.287 SGL Metadata Pointer: Not Supported 00:27:41.287 Oversized SGL: Not Supported 00:27:41.287 SGL Metadata Address: Not Supported 00:27:41.287 SGL Offset: Supported 00:27:41.287 Transport SGL Data Block: Not Supported 00:27:41.287 Replay Protected Memory Block: Not Supported 00:27:41.287 00:27:41.287 
Firmware Slot Information 00:27:41.287 ========================= 00:27:41.287 Active slot: 0 00:27:41.287 00:27:41.287 00:27:41.287 Error Log 00:27:41.287 ========= 00:27:41.287 00:27:41.287 Active Namespaces 00:27:41.287 ================= 00:27:41.287 Discovery Log Page 00:27:41.287 ================== 00:27:41.287 Generation Counter: 2 00:27:41.287 Number of Records: 2 00:27:41.287 Record Format: 0 00:27:41.287 00:27:41.287 Discovery Log Entry 0 00:27:41.287 ---------------------- 00:27:41.287 Transport Type: 3 (TCP) 00:27:41.287 Address Family: 1 (IPv4) 00:27:41.287 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:41.287 Entry Flags: 00:27:41.287 Duplicate Returned Information: 1 00:27:41.287 Explicit Persistent Connection Support for Discovery: 1 00:27:41.287 Transport Requirements: 00:27:41.287 Secure Channel: Not Required 00:27:41.287 Port ID: 0 (0x0000) 00:27:41.287 Controller ID: 65535 (0xffff) 00:27:41.287 Admin Max SQ Size: 128 00:27:41.287 Transport Service Identifier: 4420 00:27:41.287 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:41.287 Transport Address: 10.0.0.2 00:27:41.287 Discovery Log Entry 1 00:27:41.287 ---------------------- 00:27:41.287 Transport Type: 3 (TCP) 00:27:41.287 Address Family: 1 (IPv4) 00:27:41.287 Subsystem Type: 2 (NVM Subsystem) 00:27:41.287 Entry Flags: 00:27:41.287 Duplicate Returned Information: 0 00:27:41.287 Explicit Persistent Connection Support for Discovery: 0 00:27:41.287 Transport Requirements: 00:27:41.287 Secure Channel: Not Required 00:27:41.287 Port ID: 0 (0x0000) 00:27:41.287 Controller ID: 65535 (0xffff) 00:27:41.287 Admin Max SQ Size: 128 00:27:41.287 Transport Service Identifier: 4420 00:27:41.287 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:41.287 Transport Address: 10.0.0.2 [2024-07-26 02:03:23.173331] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:41.287 [2024-07-26 02:03:23.173355] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171c880) on tqpair=0x16b5fe0 00:27:41.287 [2024-07-26 02:03:23.173368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.287 [2024-07-26 02:03:23.173377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171ca00) on tqpair=0x16b5fe0 00:27:41.287 [2024-07-26 02:03:23.173385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.287 [2024-07-26 02:03:23.173394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171cb80) on tqpair=0x16b5fe0 00:27:41.287 [2024-07-26 02:03:23.173401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.287 [2024-07-26 02:03:23.173409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171cd00) on tqpair=0x16b5fe0 00:27:41.287 [2024-07-26 02:03:23.173417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.287 [2024-07-26 02:03:23.173435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.287 [2024-07-26 02:03:23.173445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.287 [2024-07-26 02:03:23.173467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b5fe0) 00:27:41.287 [2024-07-26 02:03:23.173478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.287 [2024-07-26 02:03:23.173503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171cd00, cid 3, qid 0 00:27:41.287 [2024-07-26 02:03:23.173641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.287 [2024-07-26 02:03:23.173656] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.287 [2024-07-26 02:03:23.173663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.287 [2024-07-26 02:03:23.173670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171cd00) on tqpair=0x16b5fe0 00:27:41.287 [2024-07-26 02:03:23.173682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.287 [2024-07-26 02:03:23.173691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.287 [2024-07-26 02:03:23.173697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b5fe0) 00:27:41.287 [2024-07-26 02:03:23.173708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.287 [2024-07-26 02:03:23.173735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171cd00, cid 3, qid 0 00:27:41.287 [2024-07-26 02:03:23.173855] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.287 [2024-07-26 02:03:23.173870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.287 [2024-07-26 02:03:23.173877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.287 [2024-07-26 02:03:23.173884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171cd00) on tqpair=0x16b5fe0 00:27:41.287 [2024-07-26 02:03:23.173894] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:41.287 [2024-07-26 02:03:23.173907] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:41.287 [2024-07-26 02:03:23.173924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.287 [2024-07-26 02:03:23.173933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.287 [2024-07-26 
02:03:23.173940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b5fe0) 00:27:41.287 [2024-07-26 02:03:23.173951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.287 [2024-07-26 02:03:23.173972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171cd00, cid 3, qid 0 00:27:41.287 [2024-07-26 02:03:23.178089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.287 [2024-07-26 02:03:23.178106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.287 [2024-07-26 02:03:23.178113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.287 [2024-07-26 02:03:23.178120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171cd00) on tqpair=0x16b5fe0 00:27:41.287 [2024-07-26 02:03:23.178139] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.287 [2024-07-26 02:03:23.178149] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.287 [2024-07-26 02:03:23.178156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b5fe0) 00:27:41.287 [2024-07-26 02:03:23.178167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.287 [2024-07-26 02:03:23.178190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x171cd00, cid 3, qid 0 00:27:41.287 [2024-07-26 02:03:23.178329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.287 [2024-07-26 02:03:23.178341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.287 [2024-07-26 02:03:23.178348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.287 [2024-07-26 02:03:23.178355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x171cd00) on tqpair=0x16b5fe0 
00:27:41.287 [2024-07-26 02:03:23.178368] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:27:41.287 00:27:41.287 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:41.287 [2024-07-26 02:03:23.215147] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:27:41.287 [2024-07-26 02:03:23.215198] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358968 ] 00:27:41.287 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.287 [2024-07-26 02:03:23.253467] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:41.287 [2024-07-26 02:03:23.253520] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:41.287 [2024-07-26 02:03:23.253530] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:41.287 [2024-07-26 02:03:23.253545] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:41.287 [2024-07-26 02:03:23.253558] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:41.287 [2024-07-26 02:03:23.253800] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:41.287 [2024-07-26 02:03:23.253844] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1cc2fe0 0 00:27:41.287 [2024-07-26 02:03:23.268091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 1 00:27:41.287 [2024-07-26 02:03:23.268122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:41.287 [2024-07-26 02:03:23.268133] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:41.287 [2024-07-26 02:03:23.268139] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:41.288 [2024-07-26 02:03:23.268182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.268194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.268201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cc2fe0) 00:27:41.288 [2024-07-26 02:03:23.268218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:41.288 [2024-07-26 02:03:23.268245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29880, cid 0, qid 0 00:27:41.288 [2024-07-26 02:03:23.279087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.288 [2024-07-26 02:03:23.279114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.288 [2024-07-26 02:03:23.279122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29880) on tqpair=0x1cc2fe0 00:27:41.288 [2024-07-26 02:03:23.279150] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:41.288 [2024-07-26 02:03:23.279162] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:41.288 [2024-07-26 02:03:23.279172] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:41.288 [2024-07-26 02:03:23.279192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cc2fe0) 00:27:41.288 [2024-07-26 02:03:23.279220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.288 [2024-07-26 02:03:23.279245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29880, cid 0, qid 0 00:27:41.288 [2024-07-26 02:03:23.279392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.288 [2024-07-26 02:03:23.279408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.288 [2024-07-26 02:03:23.279415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29880) on tqpair=0x1cc2fe0 00:27:41.288 [2024-07-26 02:03:23.279435] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:41.288 [2024-07-26 02:03:23.279451] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:41.288 [2024-07-26 02:03:23.279465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cc2fe0) 00:27:41.288 [2024-07-26 02:03:23.279495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.288 [2024-07-26 02:03:23.279516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1d29880, cid 0, qid 0 00:27:41.288 [2024-07-26 02:03:23.279618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.288 [2024-07-26 02:03:23.279633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.288 [2024-07-26 02:03:23.279640] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29880) on tqpair=0x1cc2fe0 00:27:41.288 [2024-07-26 02:03:23.279660] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:41.288 [2024-07-26 02:03:23.279675] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:41.288 [2024-07-26 02:03:23.279687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cc2fe0) 00:27:41.288 [2024-07-26 02:03:23.279712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.288 [2024-07-26 02:03:23.279734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29880, cid 0, qid 0 00:27:41.288 [2024-07-26 02:03:23.279837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.288 [2024-07-26 02:03:23.279849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.288 [2024-07-26 02:03:23.279857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29880) on tqpair=0x1cc2fe0 00:27:41.288 [2024-07-26 02:03:23.279873] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:41.288 [2024-07-26 02:03:23.279890] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.279907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cc2fe0) 00:27:41.288 [2024-07-26 02:03:23.279918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.288 [2024-07-26 02:03:23.279939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29880, cid 0, qid 0 00:27:41.288 [2024-07-26 02:03:23.280039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.288 [2024-07-26 02:03:23.280052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.288 [2024-07-26 02:03:23.280067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.280075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29880) on tqpair=0x1cc2fe0 00:27:41.288 [2024-07-26 02:03:23.280084] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:41.288 [2024-07-26 02:03:23.280094] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:41.288 [2024-07-26 02:03:23.280111] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:41.288 [2024-07-26 02:03:23.280221] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:41.288 [2024-07-26 02:03:23.280229] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:41.288 [2024-07-26 02:03:23.280242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.280250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.280256] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cc2fe0) 00:27:41.288 [2024-07-26 02:03:23.280267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.288 [2024-07-26 02:03:23.280289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29880, cid 0, qid 0 00:27:41.288 [2024-07-26 02:03:23.280431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.288 [2024-07-26 02:03:23.280447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.288 [2024-07-26 02:03:23.280455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.280462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29880) on tqpair=0x1cc2fe0 00:27:41.288 [2024-07-26 02:03:23.280470] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:41.288 [2024-07-26 02:03:23.280487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.280496] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.288 [2024-07-26 02:03:23.280502] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cc2fe0) 00:27:41.288 [2024-07-26 02:03:23.280513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.289 [2024-07-26 
02:03:23.280534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29880, cid 0, qid 0 00:27:41.289 [2024-07-26 02:03:23.280630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.289 [2024-07-26 02:03:23.280642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.289 [2024-07-26 02:03:23.280649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.289 [2024-07-26 02:03:23.280656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29880) on tqpair=0x1cc2fe0 00:27:41.289 [2024-07-26 02:03:23.280664] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:41.289 [2024-07-26 02:03:23.280673] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:41.289 [2024-07-26 02:03:23.280686] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:41.289 [2024-07-26 02:03:23.280704] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:41.289 [2024-07-26 02:03:23.280719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.289 [2024-07-26 02:03:23.280727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cc2fe0) 00:27:41.289 [2024-07-26 02:03:23.280738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.289 [2024-07-26 02:03:23.280759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29880, cid 0, qid 0 00:27:41.289 [2024-07-26 02:03:23.280905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.289 
[2024-07-26 02:03:23.280918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.289 [2024-07-26 02:03:23.280925] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.289 [2024-07-26 02:03:23.280931] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cc2fe0): datao=0, datal=4096, cccid=0 00:27:41.289 [2024-07-26 02:03:23.280939] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d29880) on tqpair(0x1cc2fe0): expected_datao=0, payload_size=4096 00:27:41.289 [2024-07-26 02:03:23.280947] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.289 [2024-07-26 02:03:23.280964] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.289 [2024-07-26 02:03:23.280974] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.549 [2024-07-26 02:03:23.322072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.550 [2024-07-26 02:03:23.322091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.550 [2024-07-26 02:03:23.322099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29880) on tqpair=0x1cc2fe0 00:27:41.550 [2024-07-26 02:03:23.322119] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:41.550 [2024-07-26 02:03:23.322133] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:41.550 [2024-07-26 02:03:23.322141] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:41.550 [2024-07-26 02:03:23.322149] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:41.550 [2024-07-26 02:03:23.322158] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:41.550 [2024-07-26 02:03:23.322167] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.322183] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.322201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322210] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cc2fe0) 00:27:41.550 [2024-07-26 02:03:23.322229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:41.550 [2024-07-26 02:03:23.322253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29880, cid 0, qid 0 00:27:41.550 [2024-07-26 02:03:23.322392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.550 [2024-07-26 02:03:23.322407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.550 [2024-07-26 02:03:23.322414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29880) on tqpair=0x1cc2fe0 00:27:41.550 [2024-07-26 02:03:23.322433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cc2fe0) 00:27:41.550 [2024-07-26 02:03:23.322458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.550 [2024-07-26 02:03:23.322468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1cc2fe0) 00:27:41.550 [2024-07-26 02:03:23.322490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.550 [2024-07-26 02:03:23.322500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1cc2fe0) 00:27:41.550 [2024-07-26 02:03:23.322522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.550 [2024-07-26 02:03:23.322532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.550 [2024-07-26 02:03:23.322554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.550 [2024-07-26 02:03:23.322563] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.322585] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout 
(timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.322601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cc2fe0) 00:27:41.550 [2024-07-26 02:03:23.322621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.550 [2024-07-26 02:03:23.322644] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29880, cid 0, qid 0 00:27:41.550 [2024-07-26 02:03:23.322655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29a00, cid 1, qid 0 00:27:41.550 [2024-07-26 02:03:23.322663] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29b80, cid 2, qid 0 00:27:41.550 [2024-07-26 02:03:23.322671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.550 [2024-07-26 02:03:23.322679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29e80, cid 4, qid 0 00:27:41.550 [2024-07-26 02:03:23.322836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.550 [2024-07-26 02:03:23.322851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.550 [2024-07-26 02:03:23.322858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29e80) on tqpair=0x1cc2fe0 00:27:41.550 [2024-07-26 02:03:23.322875] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:41.550 [2024-07-26 02:03:23.322884] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.322903] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.322916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.322928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.322942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cc2fe0) 00:27:41.550 [2024-07-26 02:03:23.322952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:41.550 [2024-07-26 02:03:23.322974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29e80, cid 4, qid 0 00:27:41.550 [2024-07-26 02:03:23.323116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.550 [2024-07-26 02:03:23.323130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.550 [2024-07-26 02:03:23.323137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29e80) on tqpair=0x1cc2fe0 00:27:41.550 [2024-07-26 02:03:23.323215] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.323237] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.323252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323260] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cc2fe0) 00:27:41.550 [2024-07-26 02:03:23.323271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.550 [2024-07-26 02:03:23.323293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29e80, cid 4, qid 0 00:27:41.550 [2024-07-26 02:03:23.323444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.550 [2024-07-26 02:03:23.323460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.550 [2024-07-26 02:03:23.323468] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323475] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cc2fe0): datao=0, datal=4096, cccid=4 00:27:41.550 [2024-07-26 02:03:23.323483] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d29e80) on tqpair(0x1cc2fe0): expected_datao=0, payload_size=4096 00:27:41.550 [2024-07-26 02:03:23.323490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323501] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323508] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.550 [2024-07-26 02:03:23.323529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.550 [2024-07-26 02:03:23.323536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29e80) on tqpair=0x1cc2fe0 00:27:41.550 [2024-07-26 02:03:23.323560] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 
00:27:41.550 [2024-07-26 02:03:23.323578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.323597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.323610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cc2fe0) 00:27:41.550 [2024-07-26 02:03:23.323629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.550 [2024-07-26 02:03:23.323651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29e80, cid 4, qid 0 00:27:41.550 [2024-07-26 02:03:23.323787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.550 [2024-07-26 02:03:23.323802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.550 [2024-07-26 02:03:23.323809] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323816] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cc2fe0): datao=0, datal=4096, cccid=4 00:27:41.550 [2024-07-26 02:03:23.323823] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d29e80) on tqpair(0x1cc2fe0): expected_datao=0, payload_size=4096 00:27:41.550 [2024-07-26 02:03:23.323831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323841] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323848] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:27:41.550 [2024-07-26 02:03:23.323870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.550 [2024-07-26 02:03:23.323876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29e80) on tqpair=0x1cc2fe0 00:27:41.550 [2024-07-26 02:03:23.323908] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.323928] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:41.550 [2024-07-26 02:03:23.323942] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.323949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cc2fe0) 00:27:41.550 [2024-07-26 02:03:23.323960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.550 [2024-07-26 02:03:23.323986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29e80, cid 4, qid 0 00:27:41.550 [2024-07-26 02:03:23.324109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.550 [2024-07-26 02:03:23.324125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.550 [2024-07-26 02:03:23.324132] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-26 02:03:23.324138] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cc2fe0): datao=0, datal=4096, cccid=4 00:27:41.551 [2024-07-26 02:03:23.324146] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d29e80) on tqpair(0x1cc2fe0): expected_datao=0, payload_size=4096 00:27:41.551 
[2024-07-26 02:03:23.324154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324164] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324171] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.551 [2024-07-26 02:03:23.324193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.551 [2024-07-26 02:03:23.324199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29e80) on tqpair=0x1cc2fe0 00:27:41.551 [2024-07-26 02:03:23.324220] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:41.551 [2024-07-26 02:03:23.324236] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:41.551 [2024-07-26 02:03:23.324251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:41.551 [2024-07-26 02:03:23.324265] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:41.551 [2024-07-26 02:03:23.324275] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:41.551 [2024-07-26 02:03:23.324284] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:41.551 [2024-07-26 02:03:23.324294] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not 
sending Set Features - Host ID 00:27:41.551 [2024-07-26 02:03:23.324302] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:41.551 [2024-07-26 02:03:23.324311] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:41.551 [2024-07-26 02:03:23.324333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cc2fe0) 00:27:41.551 [2024-07-26 02:03:23.324353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-07-26 02:03:23.324364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cc2fe0) 00:27:41.551 [2024-07-26 02:03:23.324388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.551 [2024-07-26 02:03:23.324414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29e80, cid 4, qid 0 00:27:41.551 [2024-07-26 02:03:23.324426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d2a000, cid 5, qid 0 00:27:41.551 [2024-07-26 02:03:23.324568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.551 [2024-07-26 02:03:23.324584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.551 [2024-07-26 02:03:23.324591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1d29e80) on tqpair=0x1cc2fe0 00:27:41.551 [2024-07-26 02:03:23.324608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.551 [2024-07-26 02:03:23.324617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.551 [2024-07-26 02:03:23.324623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d2a000) on tqpair=0x1cc2fe0 00:27:41.551 [2024-07-26 02:03:23.324646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cc2fe0) 00:27:41.551 [2024-07-26 02:03:23.324667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-07-26 02:03:23.324688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d2a000, cid 5, qid 0 00:27:41.551 [2024-07-26 02:03:23.324786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.551 [2024-07-26 02:03:23.324801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.551 [2024-07-26 02:03:23.324808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d2a000) on tqpair=0x1cc2fe0 00:27:41.551 [2024-07-26 02:03:23.324831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324840] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cc2fe0) 00:27:41.551 [2024-07-26 02:03:23.324851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-07-26 
02:03:23.324872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d2a000, cid 5, qid 0 00:27:41.551 [2024-07-26 02:03:23.324972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.551 [2024-07-26 02:03:23.324984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.551 [2024-07-26 02:03:23.324991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.324998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d2a000) on tqpair=0x1cc2fe0 00:27:41.551 [2024-07-26 02:03:23.325013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cc2fe0) 00:27:41.551 [2024-07-26 02:03:23.325033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-07-26 02:03:23.325053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d2a000, cid 5, qid 0 00:27:41.551 [2024-07-26 02:03:23.325157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.551 [2024-07-26 02:03:23.325170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.551 [2024-07-26 02:03:23.325177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d2a000) on tqpair=0x1cc2fe0 00:27:41.551 [2024-07-26 02:03:23.325209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cc2fe0) 00:27:41.551 [2024-07-26 02:03:23.325232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff 
cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-07-26 02:03:23.325244] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cc2fe0) 00:27:41.551 [2024-07-26 02:03:23.325266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-07-26 02:03:23.325278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1cc2fe0) 00:27:41.551 [2024-07-26 02:03:23.325295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-07-26 02:03:23.325307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cc2fe0) 00:27:41.551 [2024-07-26 02:03:23.325324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-07-26 02:03:23.325347] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d2a000, cid 5, qid 0 00:27:41.551 [2024-07-26 02:03:23.325358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29e80, cid 4, qid 0 00:27:41.551 [2024-07-26 02:03:23.325366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d2a180, cid 6, qid 0 00:27:41.551 [2024-07-26 02:03:23.325373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d2a300, cid 7, qid 0 00:27:41.551 
[2024-07-26 02:03:23.325585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.551 [2024-07-26 02:03:23.325600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.551 [2024-07-26 02:03:23.325607] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325614] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cc2fe0): datao=0, datal=8192, cccid=5 00:27:41.551 [2024-07-26 02:03:23.325622] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d2a000) on tqpair(0x1cc2fe0): expected_datao=0, payload_size=8192 00:27:41.551 [2024-07-26 02:03:23.325629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325669] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325679] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.551 [2024-07-26 02:03:23.325697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.551 [2024-07-26 02:03:23.325704] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325711] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cc2fe0): datao=0, datal=512, cccid=4 00:27:41.551 [2024-07-26 02:03:23.325718] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d29e80) on tqpair(0x1cc2fe0): expected_datao=0, payload_size=512 00:27:41.551 [2024-07-26 02:03:23.325726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325735] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325742] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325751] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.551 [2024-07-26 02:03:23.325759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.551 [2024-07-26 02:03:23.325766] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325772] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cc2fe0): datao=0, datal=512, cccid=6 00:27:41.551 [2024-07-26 02:03:23.325780] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d2a180) on tqpair(0x1cc2fe0): expected_datao=0, payload_size=512 00:27:41.551 [2024-07-26 02:03:23.325787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325796] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325807] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.551 [2024-07-26 02:03:23.325825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.551 [2024-07-26 02:03:23.325832] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325838] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cc2fe0): datao=0, datal=4096, cccid=7 00:27:41.551 [2024-07-26 02:03:23.325846] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d2a300) on tqpair(0x1cc2fe0): expected_datao=0, payload_size=4096 00:27:41.551 [2024-07-26 02:03:23.325853] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325862] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325870] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-26 02:03:23.325881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:27:41.551 [2024-07-26 02:03:23.325891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.551 [2024-07-26 02:03:23.325898] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-26 02:03:23.325904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d2a000) on tqpair=0x1cc2fe0 00:27:41.552 [2024-07-26 02:03:23.325923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.552 [2024-07-26 02:03:23.325934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.552 [2024-07-26 02:03:23.325941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-26 02:03:23.325947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29e80) on tqpair=0x1cc2fe0 00:27:41.552 [2024-07-26 02:03:23.325965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.552 [2024-07-26 02:03:23.325976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.552 [2024-07-26 02:03:23.325982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-26 02:03:23.325989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d2a180) on tqpair=0x1cc2fe0 00:27:41.552 [2024-07-26 02:03:23.326000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.552 [2024-07-26 02:03:23.326024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.552 [2024-07-26 02:03:23.326031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-26 02:03:23.326038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d2a300) on tqpair=0x1cc2fe0 00:27:41.552 ===================================================== 00:27:41.552 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.552 ===================================================== 00:27:41.552 Controller 
Capabilities/Features
00:27:41.552 ================================
00:27:41.552 Vendor ID: 8086
00:27:41.552 Subsystem Vendor ID: 8086
00:27:41.552 Serial Number: SPDK00000000000001
00:27:41.552 Model Number: SPDK bdev Controller
00:27:41.552 Firmware Version: 24.09
00:27:41.552 Recommended Arb Burst: 6
00:27:41.552 IEEE OUI Identifier: e4 d2 5c
00:27:41.552 Multi-path I/O
00:27:41.552 May have multiple subsystem ports: Yes
00:27:41.552 May have multiple controllers: Yes
00:27:41.552 Associated with SR-IOV VF: No
00:27:41.552 Max Data Transfer Size: 131072
00:27:41.552 Max Number of Namespaces: 32
00:27:41.552 Max Number of I/O Queues: 127
00:27:41.552 NVMe Specification Version (VS): 1.3
00:27:41.552 NVMe Specification Version (Identify): 1.3
00:27:41.552 Maximum Queue Entries: 128
00:27:41.552 Contiguous Queues Required: Yes
00:27:41.552 Arbitration Mechanisms Supported
00:27:41.552 Weighted Round Robin: Not Supported
00:27:41.552 Vendor Specific: Not Supported
00:27:41.552 Reset Timeout: 15000 ms
00:27:41.552 Doorbell Stride: 4 bytes
00:27:41.552 NVM Subsystem Reset: Not Supported
00:27:41.552 Command Sets Supported
00:27:41.552 NVM Command Set: Supported
00:27:41.552 Boot Partition: Not Supported
00:27:41.552 Memory Page Size Minimum: 4096 bytes
00:27:41.552 Memory Page Size Maximum: 4096 bytes
00:27:41.552 Persistent Memory Region: Not Supported
00:27:41.552 Optional Asynchronous Events Supported
00:27:41.552 Namespace Attribute Notices: Supported
00:27:41.552 Firmware Activation Notices: Not Supported
00:27:41.552 ANA Change Notices: Not Supported
00:27:41.552 PLE Aggregate Log Change Notices: Not Supported
00:27:41.552 LBA Status Info Alert Notices: Not Supported
00:27:41.552 EGE Aggregate Log Change Notices: Not Supported
00:27:41.552 Normal NVM Subsystem Shutdown event: Not Supported
00:27:41.552 Zone Descriptor Change Notices: Not Supported
00:27:41.552 Discovery Log Change Notices: Not Supported
00:27:41.552 Controller Attributes
00:27:41.552 128-bit Host Identifier: Supported
00:27:41.552 Non-Operational Permissive Mode: Not Supported
00:27:41.552 NVM Sets: Not Supported
00:27:41.552 Read Recovery Levels: Not Supported
00:27:41.552 Endurance Groups: Not Supported
00:27:41.552 Predictable Latency Mode: Not Supported
00:27:41.552 Traffic Based Keep ALive: Not Supported
00:27:41.552 Namespace Granularity: Not Supported
00:27:41.552 SQ Associations: Not Supported
00:27:41.552 UUID List: Not Supported
00:27:41.552 Multi-Domain Subsystem: Not Supported
00:27:41.552 Fixed Capacity Management: Not Supported
00:27:41.552 Variable Capacity Management: Not Supported
00:27:41.552 Delete Endurance Group: Not Supported
00:27:41.552 Delete NVM Set: Not Supported
00:27:41.552 Extended LBA Formats Supported: Not Supported
00:27:41.552 Flexible Data Placement Supported: Not Supported
00:27:41.552
00:27:41.552 Controller Memory Buffer Support
00:27:41.552 ================================
00:27:41.552 Supported: No
00:27:41.552
00:27:41.552 Persistent Memory Region Support
00:27:41.552 ================================
00:27:41.552 Supported: No
00:27:41.552
00:27:41.552 Admin Command Set Attributes
00:27:41.552 ============================
00:27:41.552 Security Send/Receive: Not Supported
00:27:41.552 Format NVM: Not Supported
00:27:41.552 Firmware Activate/Download: Not Supported
00:27:41.552 Namespace Management: Not Supported
00:27:41.552 Device Self-Test: Not Supported
00:27:41.552 Directives: Not Supported
00:27:41.552 NVMe-MI: Not Supported
00:27:41.552 Virtualization Management: Not Supported
00:27:41.552 Doorbell Buffer Config: Not Supported
00:27:41.552 Get LBA Status Capability: Not Supported
00:27:41.552 Command & Feature Lockdown Capability: Not Supported
00:27:41.552 Abort Command Limit: 4
00:27:41.552 Async Event Request Limit: 4
00:27:41.552 Number of Firmware Slots: N/A
00:27:41.552 Firmware Slot 1 Read-Only: N/A
00:27:41.552 Firmware Activation Without Reset: N/A
00:27:41.552 Multiple Update Detection Support: N/A
00:27:41.552 Firmware Update Granularity: No Information Provided
00:27:41.552 Per-Namespace SMART Log: No
00:27:41.552 Asymmetric Namespace Access Log Page: Not Supported
00:27:41.552 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:27:41.552 Command Effects Log Page: Supported
00:27:41.552 Get Log Page Extended Data: Supported
00:27:41.552 Telemetry Log Pages: Not Supported
00:27:41.552 Persistent Event Log Pages: Not Supported
00:27:41.552 Supported Log Pages Log Page: May Support
00:27:41.552 Commands Supported & Effects Log Page: Not Supported
00:27:41.552 Feature Identifiers & Effects Log Page:May Support
00:27:41.552 NVMe-MI Commands & Effects Log Page: May Support
00:27:41.552 Data Area 4 for Telemetry Log: Not Supported
00:27:41.552 Error Log Page Entries Supported: 128
00:27:41.552 Keep Alive: Supported
00:27:41.552 Keep Alive Granularity: 10000 ms
00:27:41.552
00:27:41.552 NVM Command Set Attributes
00:27:41.552 ==========================
00:27:41.552 Submission Queue Entry Size
00:27:41.552 Max: 64
00:27:41.552 Min: 64
00:27:41.552 Completion Queue Entry Size
00:27:41.552 Max: 16
00:27:41.552 Min: 16
00:27:41.552 Number of Namespaces: 32
00:27:41.552 Compare Command: Supported
00:27:41.552 Write Uncorrectable Command: Not Supported
00:27:41.552 Dataset Management Command: Supported
00:27:41.552 Write Zeroes Command: Supported
00:27:41.552 Set Features Save Field: Not Supported
00:27:41.552 Reservations: Supported
00:27:41.552 Timestamp: Not Supported
00:27:41.552 Copy: Supported
00:27:41.552 Volatile Write Cache: Present
00:27:41.552 Atomic Write Unit (Normal): 1
00:27:41.552 Atomic Write Unit (PFail): 1
00:27:41.552 Atomic Compare & Write Unit: 1
00:27:41.552 Fused Compare & Write: Supported
00:27:41.552 Scatter-Gather List
00:27:41.552 SGL Command Set: Supported
00:27:41.552 SGL Keyed: Supported
00:27:41.552 SGL Bit Bucket Descriptor: Not Supported
00:27:41.552 SGL Metadata Pointer: Not Supported
00:27:41.552 Oversized SGL: Not Supported
00:27:41.552 SGL Metadata Address: Not Supported
00:27:41.552 SGL Offset: Supported
00:27:41.552 Transport SGL Data Block: Not Supported
00:27:41.552 Replay Protected Memory Block: Not Supported
00:27:41.552
00:27:41.552 Firmware Slot Information
00:27:41.552 =========================
00:27:41.552 Active slot: 1
00:27:41.552 Slot 1 Firmware Revision: 24.09
00:27:41.552
00:27:41.552
00:27:41.552 Commands Supported and Effects
00:27:41.552 ==============================
00:27:41.552 Admin Commands
00:27:41.552 --------------
00:27:41.552 Get Log Page (02h): Supported
00:27:41.552 Identify (06h): Supported
00:27:41.552 Abort (08h): Supported
00:27:41.552 Set Features (09h): Supported
00:27:41.552 Get Features (0Ah): Supported
00:27:41.552 Asynchronous Event Request (0Ch): Supported
00:27:41.552 Keep Alive (18h): Supported
00:27:41.552 I/O Commands
00:27:41.552 ------------
00:27:41.552 Flush (00h): Supported LBA-Change
00:27:41.552 Write (01h): Supported LBA-Change
00:27:41.552 Read (02h): Supported
00:27:41.552 Compare (05h): Supported
00:27:41.552 Write Zeroes (08h): Supported LBA-Change
00:27:41.552 Dataset Management (09h): Supported LBA-Change
00:27:41.552 Copy (19h): Supported LBA-Change
00:27:41.552
00:27:41.552 Error Log
00:27:41.552 =========
00:27:41.552
00:27:41.552 Arbitration
00:27:41.552 ===========
00:27:41.552 Arbitration Burst: 1
00:27:41.552
00:27:41.552 Power Management
00:27:41.552 ================
00:27:41.553 Number of Power States: 1
00:27:41.553 Current Power State: Power State #0
00:27:41.553 Power State #0:
00:27:41.553 Max Power: 0.00 W
00:27:41.553 Non-Operational State: Operational
00:27:41.553 Entry Latency: Not Reported
00:27:41.553 Exit Latency: Not Reported
00:27:41.553 Relative Read Throughput: 0
00:27:41.553 Relative Read Latency: 0
00:27:41.553 Relative Write Throughput: 0
00:27:41.553 Relative Write Latency: 0
00:27:41.553 Idle Power: Not Reported
00:27:41.553 Active Power: Not Reported
00:27:41.553 Non-Operational Permissive Mode: Not Supported
00:27:41.553
00:27:41.553 Health Information
00:27:41.553 ==================
00:27:41.553 Critical Warnings:
00:27:41.553 Available Spare Space: OK
00:27:41.553 Temperature: OK
00:27:41.553 Device Reliability: OK
00:27:41.553 Read Only: No
00:27:41.553 Volatile Memory Backup: OK
00:27:41.553 Current Temperature: 0 Kelvin (-273 Celsius)
00:27:41.553 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:27:41.553 Available Spare: 0%
00:27:41.553 Available Spare Threshold: 0%
00:27:41.553 Life Percentage Used:
[2024-07-26 02:03:23.330186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:41.553 [2024-07-26 02:03:23.330200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cc2fe0)
00:27:41.553 [2024-07-26 02:03:23.330212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:41.553 [2024-07-26 02:03:23.330236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d2a300, cid 7, qid 0
00:27:41.553 [2024-07-26 02:03:23.330388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:41.553 [2024-07-26 02:03:23.330403] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:41.553 [2024-07-26 02:03:23.330410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:41.553 [2024-07-26 02:03:23.330417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d2a300) on tqpair=0x1cc2fe0
00:27:41.553 [2024-07-26 02:03:23.330467] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:27:41.553 [2024-07-26 02:03:23.330489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29880) on tqpair=0x1cc2fe0
00:27:41.553 [2024-07-26 02:03:23.330501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:41.553 [2024-07-26 02:03:23.330511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29a00) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 02:03:23.330522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.553 [2024-07-26 02:03:23.330531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29b80) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 02:03:23.330539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.553 [2024-07-26 02:03:23.330548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 02:03:23.330555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.553 [2024-07-26 02:03:23.330569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.330577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.330584] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.553 [2024-07-26 02:03:23.330595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.553 [2024-07-26 02:03:23.330618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.553 [2024-07-26 02:03:23.330749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-26 02:03:23.330761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-26 02:03:23.330768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.330775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 02:03:23.330786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.330794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.330801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.553 [2024-07-26 02:03:23.330811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.553 [2024-07-26 02:03:23.330837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.553 [2024-07-26 02:03:23.330948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-26 02:03:23.330960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-26 02:03:23.330967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.330974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 02:03:23.330983] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:41.553 [2024-07-26 02:03:23.330991] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:41.553 [2024-07-26 02:03:23.331006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.553 [2024-07-26 02:03:23.331033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:41.553 [2024-07-26 02:03:23.331053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.553 [2024-07-26 02:03:23.331165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-26 02:03:23.331178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-26 02:03:23.331184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331191] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 02:03:23.331213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331234] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.553 [2024-07-26 02:03:23.331244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.553 [2024-07-26 02:03:23.331265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.553 [2024-07-26 02:03:23.331367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-26 02:03:23.331381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-26 02:03:23.331388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 02:03:23.331412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331428] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.553 [2024-07-26 02:03:23.331439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.553 [2024-07-26 02:03:23.331460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.553 [2024-07-26 02:03:23.331559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-26 02:03:23.331570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-26 02:03:23.331577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 02:03:23.331600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.553 [2024-07-26 02:03:23.331627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.553 [2024-07-26 02:03:23.331647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.553 [2024-07-26 02:03:23.331744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-26 02:03:23.331759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-26 02:03:23.331766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 
02:03:23.331789] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.553 [2024-07-26 02:03:23.331816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.553 [2024-07-26 02:03:23.331836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.553 [2024-07-26 02:03:23.331936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-26 02:03:23.331950] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-26 02:03:23.331957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 02:03:23.331980] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.331990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.332000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.553 [2024-07-26 02:03:23.332012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.553 [2024-07-26 02:03:23.332032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.553 [2024-07-26 02:03:23.332136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-26 02:03:23.332151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-26 
02:03:23.332157] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.332164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 02:03:23.332181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.332191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.332198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.553 [2024-07-26 02:03:23.332208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.553 [2024-07-26 02:03:23.332229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.553 [2024-07-26 02:03:23.332327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-26 02:03:23.332339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-26 02:03:23.332346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.332353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.553 [2024-07-26 02:03:23.332369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.332378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.553 [2024-07-26 02:03:23.332385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.553 [2024-07-26 02:03:23.332396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-26 02:03:23.332416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 
00:27:41.554 [2024-07-26 02:03:23.332512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-26 02:03:23.332527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-26 02:03:23.332534] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.332541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.554 [2024-07-26 02:03:23.332557] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.332566] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.332573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.554 [2024-07-26 02:03:23.332584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-26 02:03:23.332604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.554 [2024-07-26 02:03:23.332699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-26 02:03:23.332711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-26 02:03:23.332718] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.332725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.554 [2024-07-26 02:03:23.332741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.332750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.332757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.554 [2024-07-26 02:03:23.332771] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-26 02:03:23.332792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.554 [2024-07-26 02:03:23.332887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-26 02:03:23.332898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-26 02:03:23.332905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.332912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.554 [2024-07-26 02:03:23.332928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.332937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.332944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.554 [2024-07-26 02:03:23.332955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-26 02:03:23.332975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.554 [2024-07-26 02:03:23.333071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-26 02:03:23.333084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-26 02:03:23.333091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.554 [2024-07-26 02:03:23.333114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333123] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.554 [2024-07-26 02:03:23.333141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-26 02:03:23.333161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.554 [2024-07-26 02:03:23.333256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-26 02:03:23.333268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-26 02:03:23.333275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.554 [2024-07-26 02:03:23.333297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.554 [2024-07-26 02:03:23.333324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-26 02:03:23.333344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.554 [2024-07-26 02:03:23.333444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-26 02:03:23.333459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-26 02:03:23.333466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333473] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.554 [2024-07-26 02:03:23.333489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.554 [2024-07-26 02:03:23.333516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-26 02:03:23.333541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.554 [2024-07-26 02:03:23.333641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-26 02:03:23.333656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-26 02:03:23.333663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.554 [2024-07-26 02:03:23.333686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.554 [2024-07-26 02:03:23.333713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-26 02:03:23.333733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.554 [2024-07-26 02:03:23.333832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-26 
02:03:23.333847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-26 02:03:23.333854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.554 [2024-07-26 02:03:23.333877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.333894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.554 [2024-07-26 02:03:23.333904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-26 02:03:23.333925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.554 [2024-07-26 02:03:23.334018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-26 02:03:23.334033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-26 02:03:23.334040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.334047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.554 [2024-07-26 02:03:23.338072] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.338087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-26 02:03:23.338094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cc2fe0) 00:27:41.554 [2024-07-26 02:03:23.338105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-26 
02:03:23.338128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d29d00, cid 3, qid 0 00:27:41.554 [2024-07-26 02:03:23.338262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-26 02:03:23.338277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.555 [2024-07-26 02:03:23.338284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.555 [2024-07-26 02:03:23.338291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d29d00) on tqpair=0x1cc2fe0 00:27:41.555 [2024-07-26 02:03:23.338305] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:27:41.555 0% 00:27:41.555 Data Units Read: 0 00:27:41.555 Data Units Written: 0 00:27:41.555 Host Read Commands: 0 00:27:41.555 Host Write Commands: 0 00:27:41.555 Controller Busy Time: 0 minutes 00:27:41.555 Power Cycles: 0 00:27:41.555 Power On Hours: 0 hours 00:27:41.555 Unsafe Shutdowns: 0 00:27:41.555 Unrecoverable Media Errors: 0 00:27:41.555 Lifetime Error Log Entries: 0 00:27:41.555 Warning Temperature Time: 0 minutes 00:27:41.555 Critical Temperature Time: 0 minutes 00:27:41.555 00:27:41.555 Number of Queues 00:27:41.555 ================ 00:27:41.555 Number of I/O Submission Queues: 127 00:27:41.555 Number of I/O Completion Queues: 127 00:27:41.555 00:27:41.555 Active Namespaces 00:27:41.555 ================= 00:27:41.555 Namespace ID:1 00:27:41.555 Error Recovery Timeout: Unlimited 00:27:41.555 Command Set Identifier: NVM (00h) 00:27:41.555 Deallocate: Supported 00:27:41.555 Deallocated/Unwritten Error: Not Supported 00:27:41.555 Deallocated Read Value: Unknown 00:27:41.555 Deallocate in Write Zeroes: Not Supported 00:27:41.555 Deallocated Guard Field: 0xFFFF 00:27:41.555 Flush: Supported 00:27:41.555 Reservation: Supported 00:27:41.555 Namespace Sharing Capabilities: Multiple Controllers 00:27:41.555 Size (in LBAs): 131072 (0GiB) 
00:27:41.555 Capacity (in LBAs): 131072 (0GiB) 00:27:41.555 Utilization (in LBAs): 131072 (0GiB) 00:27:41.555 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:41.555 EUI64: ABCDEF0123456789 00:27:41.555 UUID: 837ba739-874a-4fa7-ad3b-32519513be70 00:27:41.555 Thin Provisioning: Not Supported 00:27:41.555 Per-NS Atomic Units: Yes 00:27:41.555 Atomic Boundary Size (Normal): 0 00:27:41.555 Atomic Boundary Size (PFail): 0 00:27:41.555 Atomic Boundary Offset: 0 00:27:41.555 Maximum Single Source Range Length: 65535 00:27:41.555 Maximum Copy Length: 65535 00:27:41.555 Maximum Source Range Count: 1 00:27:41.555 NGUID/EUI64 Never Reused: No 00:27:41.555 Namespace Write Protected: No 00:27:41.555 Number of LBA Formats: 1 00:27:41.555 Current LBA Format: LBA Format #00 00:27:41.555 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:41.555 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:41.555 02:03:23 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.555 rmmod nvme_tcp 00:27:41.555 rmmod nvme_fabrics 00:27:41.555 rmmod nvme_keyring 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2358900 ']' 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2358900 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2358900 ']' 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2358900 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2358900 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2358900' 00:27:41.555 killing process with pid 2358900 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2358900 00:27:41.555 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2358900 00:27:41.813 02:03:23 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.813 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.813 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.813 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.813 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.813 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.813 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.813 02:03:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.344 00:27:44.344 real 0m5.309s 00:27:44.344 user 0m4.452s 00:27:44.344 sys 0m1.811s 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:44.344 ************************************ 00:27:44.344 END TEST nvmf_identify 00:27:44.344 ************************************ 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.344 ************************************ 00:27:44.344 START TEST nvmf_perf 00:27:44.344 ************************************ 
00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:44.344 * Looking for test storage... 00:27:44.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.344 
02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:44.344 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.345 02:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 
-- # local -ga net_devs 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:46.250 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:46.250 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.250 02:03:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:46.250 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:46.250 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr 
flush cvl_0_0 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.250 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:27:46.251 00:27:46.251 --- 10.0.0.2 ping statistics --- 00:27:46.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.251 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:46.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:27:46.251 00:27:46.251 --- 10.0.0.1 ping statistics --- 00:27:46.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.251 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2360961 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2360961 00:27:46.251 
02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2360961 ']' 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:46.251 02:03:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:46.251 [2024-07-26 02:03:27.953072] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:27:46.251 [2024-07-26 02:03:27.953171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.251 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.251 [2024-07-26 02:03:28.020429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.251 [2024-07-26 02:03:28.109427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.251 [2024-07-26 02:03:28.109493] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.251 [2024-07-26 02:03:28.109506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.251 [2024-07-26 02:03:28.109517] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:46.251 [2024-07-26 02:03:28.109527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.251 [2024-07-26 02:03:28.109618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.251 [2024-07-26 02:03:28.109643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.251 [2024-07-26 02:03:28.109709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.251 [2024-07-26 02:03:28.109711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.251 02:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:46.251 02:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:27:46.251 02:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:46.251 02:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:46.251 02:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:46.251 02:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.251 02:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:46.251 02:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:49.540 02:03:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:49.540 02:03:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:49.798 02:03:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:27:49.798 02:03:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:50.055 02:03:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:50.055 02:03:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:27:50.055 02:03:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:50.055 02:03:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:50.055 02:03:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:50.313 [2024-07-26 02:03:32.137956] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.313 02:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:50.571 02:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:50.571 02:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:50.829 02:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:50.829 02:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:51.087 02:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:51.345 [2024-07-26 02:03:33.133530] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.345 02:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:51.603 02:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:27:51.603 02:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:51.603 02:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:51.603 02:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:52.978 Initializing NVMe Controllers 00:27:52.978 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:27:52.978 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:27:52.978 Initialization complete. Launching workers. 00:27:52.978 ======================================================== 00:27:52.978 Latency(us) 00:27:52.978 Device Information : IOPS MiB/s Average min max 00:27:52.979 PCIE (0000:88:00.0) NSID 1 from core 0: 86162.13 336.57 370.87 22.45 4374.12 00:27:52.979 ======================================================== 00:27:52.979 Total : 86162.13 336.57 370.87 22.45 4374.12 00:27:52.979 00:27:52.979 02:03:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:52.979 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.931 Initializing NVMe Controllers 00:27:53.931 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:53.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:53.931 
Initialization complete. Launching workers. 00:27:53.931 ======================================================== 00:27:53.931 Latency(us) 00:27:53.931 Device Information : IOPS MiB/s Average min max 00:27:53.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.79 0.32 12712.35 162.09 45725.00 00:27:53.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 74.81 0.29 13366.60 5141.90 53729.93 00:27:53.931 ======================================================== 00:27:53.931 Total : 156.61 0.61 13024.89 162.09 53729.93 00:27:53.931 00:27:53.931 02:03:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.931 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.327 Initializing NVMe Controllers 00:27:55.328 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:55.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:55.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:55.328 Initialization complete. Launching workers. 
00:27:55.328 ======================================================== 00:27:55.328 Latency(us) 00:27:55.328 Device Information : IOPS MiB/s Average min max 00:27:55.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8576.69 33.50 3748.25 737.39 7448.43 00:27:55.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3918.40 15.31 8202.90 6340.73 15980.48 00:27:55.328 ======================================================== 00:27:55.328 Total : 12495.09 48.81 5145.21 737.39 15980.48 00:27:55.328 00:27:55.328 02:03:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:55.328 02:03:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:55.328 02:03:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:55.328 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.859 Initializing NVMe Controllers 00:27:57.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:57.859 Controller IO queue size 128, less than required. 00:27:57.859 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:57.859 Controller IO queue size 128, less than required. 00:27:57.859 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:57.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:57.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:57.859 Initialization complete. Launching workers. 
00:27:57.859 ========================================================
00:27:57.859 Latency(us)
00:27:57.859 Device Information : IOPS MiB/s Average min max
00:27:57.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1438.24 359.56 90978.73 61079.64 128080.72
00:27:57.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 583.90 145.97 227496.77 78585.30 357855.10
00:27:57.859 ========================================================
00:27:57.859 Total : 2022.14 505.54 130398.53 61079.64 357855.10
00:27:57.859
00:27:57.859 02:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:27:57.859 EAL: No free 2048 kB hugepages reported on node 1
00:27:57.859 No valid NVMe controllers or AIO or URING devices found
00:27:57.859 Initializing NVMe Controllers
00:27:57.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:57.859 Controller IO queue size 128, less than required.
00:27:57.859 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:57.859 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:27:57.859 Controller IO queue size 128, less than required.
00:27:57.859 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:57.859 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:27:57.859 WARNING: Some requested NVMe devices were skipped
00:27:57.859 02:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:27:58.117 EAL: No free 2048 kB hugepages reported on node 1
00:28:00.649 Initializing NVMe Controllers
00:28:00.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:00.649 Controller IO queue size 128, less than required.
00:28:00.649 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:00.649 Controller IO queue size 128, less than required.
00:28:00.649 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:00.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:00.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:00.649 Initialization complete. Launching workers.
00:28:00.649
00:28:00.649 ====================
00:28:00.649 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:28:00.649 TCP transport:
00:28:00.649 polls: 11101
00:28:00.649 idle_polls: 5953
00:28:00.649 sock_completions: 5148
00:28:00.649 nvme_completions: 5925
00:28:00.649 submitted_requests: 8918
00:28:00.649 queued_requests: 1
00:28:00.649
00:28:00.649 ====================
00:28:00.649 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:28:00.649 TCP transport:
00:28:00.649 polls: 14105
00:28:00.649 idle_polls: 8924
00:28:00.649 sock_completions: 5181
00:28:00.649 nvme_completions: 5817
00:28:00.649 submitted_requests: 8776
00:28:00.649 queued_requests: 1
00:28:00.649 ========================================================
00:28:00.649 Latency(us)
00:28:00.649 Device Information : IOPS MiB/s Average min max
00:28:00.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1480.92 370.23 87974.03 58780.99 158372.76
00:28:00.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1453.92 363.48 89663.09 50359.18 130243.45
00:28:00.649 ========================================================
00:28:00.649 Total : 2934.84 733.71 88810.79 50359.18 158372.76
00:28:00.649
00:28:00.649 02:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:28:00.649 02:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:00.908 02:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:28:00.908 02:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']'
00:28:00.908 02:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:28:04.193 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=04e655ae-cdea-428b-9c2c-99a312b2f9ea 00:28:04.193 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 04e655ae-cdea-428b-9c2c-99a312b2f9ea 00:28:04.193 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=04e655ae-cdea-428b-9c2c-99a312b2f9ea 00:28:04.193 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:04.193 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:04.193 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:04.193 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:04.451 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:04.451 { 00:28:04.451 "uuid": "04e655ae-cdea-428b-9c2c-99a312b2f9ea", 00:28:04.451 "name": "lvs_0", 00:28:04.451 "base_bdev": "Nvme0n1", 00:28:04.451 "total_data_clusters": 238234, 00:28:04.451 "free_clusters": 238234, 00:28:04.451 "block_size": 512, 00:28:04.451 "cluster_size": 4194304 00:28:04.451 } 00:28:04.451 ]' 00:28:04.451 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="04e655ae-cdea-428b-9c2c-99a312b2f9ea") .free_clusters' 00:28:04.451 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:04.451 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="04e655ae-cdea-428b-9c2c-99a312b2f9ea") .cluster_size' 00:28:04.451 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:04.451 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:04.451 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 
00:28:04.451 952936 00:28:04.451 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:04.451 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:04.451 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 04e655ae-cdea-428b-9c2c-99a312b2f9ea lbd_0 20480 00:28:05.016 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=9cc2d65e-b964-4991-9890-6fa28383c9a3 00:28:05.016 02:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 9cc2d65e-b964-4991-9890-6fa28383c9a3 lvs_n_0 00:28:05.952 02:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1c6318ca-bf95-47d4-9128-e56cef3a6df8 00:28:05.952 02:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1c6318ca-bf95-47d4-9128-e56cef3a6df8 00:28:05.952 02:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=1c6318ca-bf95-47d4-9128-e56cef3a6df8 00:28:05.952 02:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:05.952 02:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:05.952 02:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:05.953 02:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:06.210 02:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:06.210 { 00:28:06.210 "uuid": "04e655ae-cdea-428b-9c2c-99a312b2f9ea", 00:28:06.210 "name": "lvs_0", 00:28:06.210 "base_bdev": "Nvme0n1", 00:28:06.210 "total_data_clusters": 238234, 00:28:06.210 "free_clusters": 233114, 00:28:06.210 "block_size": 512, 00:28:06.210 
"cluster_size": 4194304 00:28:06.210 }, 00:28:06.210 { 00:28:06.210 "uuid": "1c6318ca-bf95-47d4-9128-e56cef3a6df8", 00:28:06.210 "name": "lvs_n_0", 00:28:06.210 "base_bdev": "9cc2d65e-b964-4991-9890-6fa28383c9a3", 00:28:06.210 "total_data_clusters": 5114, 00:28:06.210 "free_clusters": 5114, 00:28:06.210 "block_size": 512, 00:28:06.210 "cluster_size": 4194304 00:28:06.210 } 00:28:06.210 ]' 00:28:06.210 02:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="1c6318ca-bf95-47d4-9128-e56cef3a6df8") .free_clusters' 00:28:06.210 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:06.210 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="1c6318ca-bf95-47d4-9128-e56cef3a6df8") .cluster_size' 00:28:06.210 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:06.210 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:06.210 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:06.210 20456 00:28:06.211 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:06.211 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1c6318ca-bf95-47d4-9128-e56cef3a6df8 lbd_nest_0 20456 00:28:06.468 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=419db931-4721-4c7a-bada-287636fa2670 00:28:06.468 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:06.726 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:06.726 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 419db931-4721-4c7a-bada-287636fa2670 00:28:06.983 02:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:07.242 02:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:07.242 02:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:07.242 02:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:07.242 02:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:07.242 02:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:07.242 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.651 Initializing NVMe Controllers 00:28:17.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:17.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:17.651 Initialization complete. Launching workers. 
00:28:17.651 ========================================================
00:28:17.651 Latency(us)
00:28:17.651 Device Information : IOPS MiB/s Average min max
00:28:17.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 43.90 0.02 22800.51 192.79 45733.33
00:28:17.651 ========================================================
00:28:17.651 Total : 43.90 0.02 22800.51 192.79 45733.33
00:28:17.651
00:28:17.651 02:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:17.651 02:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:17.651 EAL: No free 2048 kB hugepages reported on node 1
00:28:29.854 Initializing NVMe Controllers
00:28:29.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:29.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:29.854 Initialization complete. Launching workers.
00:28:29.854 ========================================================
00:28:29.854 Latency(us)
00:28:29.854 Device Information : IOPS MiB/s Average min max
00:28:29.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 76.17 9.52 13127.45 5392.05 47974.90
00:28:29.854 ========================================================
00:28:29.854 Total : 76.17 9.52 13127.45 5392.05 47974.90
00:28:29.854
00:28:29.854 02:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:28:29.854 02:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:29.854 02:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:29.854 EAL: No free 2048 kB hugepages reported on node 1
00:28:38.016 Initializing NVMe Controllers
00:28:38.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:38.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:38.016 Initialization complete. Launching workers.
00:28:38.016 ========================================================
00:28:38.016 Latency(us)
00:28:38.016 Device Information : IOPS MiB/s Average min max
00:28:38.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7556.73 3.69 4244.21 297.80 47884.61
00:28:38.016 ========================================================
00:28:38.016 Total : 7556.73 3.69 4244.21 297.80 47884.61
00:28:38.016
00:28:38.016 02:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:38.016 02:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:38.016 EAL: No free 2048 kB hugepages reported on node 1
00:28:50.215 Initializing NVMe Controllers
00:28:50.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:50.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:50.215 Initialization complete. Launching workers.
00:28:50.215 ========================================================
00:28:50.215 Latency(us)
00:28:50.215 Device Information : IOPS MiB/s Average min max
00:28:50.215 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3028.35 378.54 10569.98 617.70 23851.44
00:28:50.215 ========================================================
00:28:50.215 Total : 3028.35 378.54 10569.98 617.70 23851.44
00:28:50.215
00:28:50.215 02:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:28:50.215 02:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:50.215 02:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:50.215 EAL: No free 2048 kB hugepages reported on node 1
00:29:00.178 Initializing NVMe Controllers
00:29:00.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:00.178 Controller IO queue size 128, less than required.
00:29:00.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:00.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:00.178 Initialization complete. Launching workers.
00:29:00.178 ========================================================
00:29:00.178 Latency(us)
00:29:00.178 Device Information : IOPS MiB/s Average min max
00:29:00.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11081.50 5.41 11555.29 1610.75 29200.14
00:29:00.178 ========================================================
00:29:00.178 Total : 11081.50 5.41 11555.29 1610.75 29200.14
00:29:00.178
00:29:00.178 02:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:29:00.178 02:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:00.178 EAL: No free 2048 kB hugepages reported on node 1
00:29:10.141 Initializing NVMe Controllers
00:29:10.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:10.141 Controller IO queue size 128, less than required.
00:29:10.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:10.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:10.141 Initialization complete. Launching workers.
00:29:10.141 ========================================================
00:29:10.141 Latency(us)
00:29:10.141 Device Information : IOPS MiB/s Average min max
00:29:10.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1217.32 152.17 105455.56 23803.57 186962.85
00:29:10.142 ========================================================
00:29:10.142 Total : 1217.32 152.17 105455.56 23803.57 186962.85
00:29:10.142
00:29:10.142 02:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:10.142 02:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 419db931-4721-4c7a-bada-287636fa2670
00:29:10.142 02:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:29:10.399 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9cc2d65e-b964-4991-9890-6fa28383c9a3
00:29:10.656 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:10.914 rmmod nvme_tcp 00:29:10.914 rmmod nvme_fabrics 00:29:10.914 rmmod nvme_keyring 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2360961 ']' 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2360961 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2360961 ']' 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2360961 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2360961 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2360961' 00:29:10.914 killing process with pid 2360961 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2360961 00:29:10.914 02:04:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2360961 00:29:12.821 02:04:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:12.821 02:04:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # 
[[ tcp == \t\c\p ]] 00:29:12.821 02:04:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:12.821 02:04:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:12.821 02:04:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:12.821 02:04:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.821 02:04:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.821 02:04:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:14.727 00:29:14.727 real 1m30.754s 00:29:14.727 user 5m32.586s 00:29:14.727 sys 0m16.841s 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:14.727 ************************************ 00:29:14.727 END TEST nvmf_perf 00:29:14.727 ************************************ 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.727 ************************************ 00:29:14.727 START TEST nvmf_fio_host 00:29:14.727 ************************************ 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:14.727 * 
Looking for test storage... 00:29:14.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.727 02:04:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.727 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:14.728 02:04:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.681 02:04:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.681 02:04:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:16.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.681 02:04:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:16.681 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.681 02:04:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:16.681 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:16.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:16.681 02:04:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:16.681 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.682 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.682 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:16.682 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:16.682 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.682 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:16.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:16.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:29:16.964 00:29:16.964 --- 10.0.0.2 ping statistics --- 00:29:16.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.964 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:16.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:16.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:29:16.964 00:29:16.964 --- 10.0.0.1 ping statistics --- 00:29:16.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.964 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.964 02:04:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2372933 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2372933 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2372933 ']' 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
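The network plumbing that nvmftestinit performed above (the ip/iptables/ping commands from nvmf_tcp_init) can be condensed into a short recap. Interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addresses are the ones recorded in this log; on other hardware the ice-driver netdev names would differ, and every command requires root:

```shell
# Recap of the target/initiator split done by nvmf_tcp_init (requires root).
# cvl_0_0 (target side) moves into a namespace; cvl_0_1 stays in the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator address on the host, target address inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring both ends (and loopback inside the namespace) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the default NVMe/TCP port and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Keeping the target NIC in its own namespace is what lets a single machine act as both NVMe-oF target (10.0.0.2, run under `ip netns exec`) and initiator (10.0.0.1) over a real physical link.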
00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:16.964 02:04:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.964 [2024-07-26 02:04:58.825468] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:29:16.964 [2024-07-26 02:04:58.825543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.964 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.964 [2024-07-26 02:04:58.889880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.222 [2024-07-26 02:04:58.976457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.223 [2024-07-26 02:04:58.976504] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.223 [2024-07-26 02:04:58.976518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.223 [2024-07-26 02:04:58.976528] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.223 [2024-07-26 02:04:58.976538] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:17.223 [2024-07-26 02:04:58.976616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.223 [2024-07-26 02:04:58.976690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.223 [2024-07-26 02:04:58.976750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.223 [2024-07-26 02:04:58.976752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.223 02:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:17.223 02:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:29:17.223 02:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:17.481 [2024-07-26 02:04:59.316853] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.481 02:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:17.481 02:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:17.481 02:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.481 02:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:17.740 Malloc1 00:29:17.740 02:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:17.997 02:04:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:18.256 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.512 [2024-07-26 02:05:00.368260] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.512 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:18.770 02:05:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:19.028 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:19.028 fio-3.35 
00:29:19.028 Starting 1 thread 00:29:19.028 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.572 00:29:21.572 test: (groupid=0, jobs=1): err= 0: pid=2373294: Fri Jul 26 02:05:03 2024 00:29:21.572 read: IOPS=9213, BW=36.0MiB/s (37.7MB/s)(72.2MiB/2006msec) 00:29:21.572 slat (nsec): min=1833, max=161546, avg=2584.17, stdev=2079.05 00:29:21.572 clat (usec): min=2494, max=12953, avg=7652.78, stdev=598.72 00:29:21.572 lat (usec): min=2514, max=12956, avg=7655.36, stdev=598.62 00:29:21.572 clat percentiles (usec): 00:29:21.572 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:29:21.572 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:29:21.572 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:29:21.572 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11863], 99.95th=[12256], 00:29:21.572 | 99.99th=[12780] 00:29:21.572 bw ( KiB/s): min=36072, max=37448, per=99.90%, avg=36816.00, stdev=569.22, samples=4 00:29:21.572 iops : min= 9018, max= 9362, avg=9204.00, stdev=142.30, samples=4 00:29:21.572 write: IOPS=9219, BW=36.0MiB/s (37.8MB/s)(72.2MiB/2006msec); 0 zone resets 00:29:21.572 slat (nsec): min=1974, max=153409, avg=2699.00, stdev=1774.46 00:29:21.572 clat (usec): min=1369, max=11265, avg=6187.76, stdev=492.64 00:29:21.572 lat (usec): min=1376, max=11267, avg=6190.46, stdev=492.59 00:29:21.572 clat percentiles (usec): 00:29:21.572 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:29:21.572 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:29:21.572 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6718], 95.00th=[ 6915], 00:29:21.572 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 9110], 99.95th=[10159], 00:29:21.572 | 99.99th=[11076] 00:29:21.572 bw ( KiB/s): min=36616, max=37056, per=100.00%, avg=36884.00, stdev=196.01, samples=4 00:29:21.572 iops : min= 9154, max= 9264, avg=9221.00, stdev=49.00, samples=4 00:29:21.572 lat (msec) : 2=0.03%, 4=0.11%, 10=99.74%, 
20=0.12% 00:29:21.572 cpu : usr=62.34%, sys=33.87%, ctx=76, majf=0, minf=6 00:29:21.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:21.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.572 issued rwts: total=18482,18495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.572 00:29:21.572 Run status group 0 (all jobs): 00:29:21.572 READ: bw=36.0MiB/s (37.7MB/s), 36.0MiB/s-36.0MiB/s (37.7MB/s-37.7MB/s), io=72.2MiB (75.7MB), run=2006-2006msec 00:29:21.572 WRITE: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=72.2MiB (75.8MB), run=2006-2006msec 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # 
shift 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:21.572 02:05:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:29:21.572 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:21.572 fio-3.35 00:29:21.572 Starting 1 thread 00:29:21.572 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.097 00:29:24.097 test: (groupid=0, jobs=1): err= 0: pid=2373621: Fri Jul 26 02:05:05 2024 00:29:24.097 read: IOPS=8069, BW=126MiB/s (132MB/s)(253MiB/2008msec) 00:29:24.097 slat (usec): min=2, max=133, avg= 3.70, stdev= 1.88 00:29:24.097 clat (usec): min=2293, max=17662, avg=9117.67, stdev=2141.34 00:29:24.097 lat (usec): min=2297, max=17665, avg=9121.37, stdev=2141.42 00:29:24.097 clat percentiles (usec): 00:29:24.097 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 7308], 00:29:24.097 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9634], 00:29:24.097 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11863], 95.00th=[12780], 00:29:24.098 | 99.00th=[15008], 99.50th=[16057], 99.90th=[16909], 99.95th=[17171], 00:29:24.098 | 99.99th=[17433] 00:29:24.098 bw ( KiB/s): min=54048, max=77664, per=51.95%, avg=67072.00, stdev=11877.39, samples=4 00:29:24.098 iops : min= 3378, max= 4854, avg=4192.00, stdev=742.34, samples=4 00:29:24.098 write: IOPS=4797, BW=75.0MiB/s (78.6MB/s)(137MiB/1828msec); 0 zone resets 00:29:24.098 slat (usec): min=30, max=193, avg=33.89, stdev= 5.72 00:29:24.098 clat (usec): min=6586, max=20623, avg=11758.77, stdev=2114.87 00:29:24.098 lat (usec): min=6627, max=20659, avg=11792.66, stdev=2114.95 00:29:24.098 clat percentiles (usec): 00:29:24.098 | 1.00th=[ 7635], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:29:24.098 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[12125], 00:29:24.098 | 70.00th=[12780], 80.00th=[13566], 90.00th=[14746], 95.00th=[15533], 00:29:24.098 | 99.00th=[16909], 99.50th=[17695], 99.90th=[18482], 99.95th=[19006], 00:29:24.098 | 99.99th=[20579] 00:29:24.098 bw ( KiB/s): min=56480, max=79904, per=90.71%, avg=69624.00, 
stdev=11661.86, samples=4 00:29:24.098 iops : min= 3530, max= 4994, avg=4351.50, stdev=728.87, samples=4 00:29:24.098 lat (msec) : 4=0.16%, 10=51.17%, 20=48.65%, 50=0.02% 00:29:24.098 cpu : usr=73.54%, sys=23.52%, ctx=27, majf=0, minf=2 00:29:24.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:29:24.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:24.098 issued rwts: total=16204,8769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:24.098 00:29:24.098 Run status group 0 (all jobs): 00:29:24.098 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=253MiB (265MB), run=2008-2008msec 00:29:24.098 WRITE: bw=75.0MiB/s (78.6MB/s), 75.0MiB/s-75.0MiB/s (78.6MB/s-78.6MB/s), io=137MiB (144MB), run=1828-1828msec 00:29:24.098 02:05:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.098 02:05:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:24.098 02:05:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:24.098 02:05:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:24.098 02:05:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:24.098 02:05:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:24.098 02:05:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:24.098 02:05:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:24.098 02:05:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:24.098 02:05:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:24.098 02:05:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:29:24.098 02:05:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:29:27.377 Nvme0n1 00:29:27.377 02:05:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:30.654 02:05:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=2b31be40-9dae-4cfa-a4ee-566a8c0889da 00:29:30.654 02:05:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 2b31be40-9dae-4cfa-a4ee-566a8c0889da 00:29:30.654 02:05:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=2b31be40-9dae-4cfa-a4ee-566a8c0889da 00:29:30.654 02:05:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:30.654 02:05:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:30.654 02:05:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:30.654 02:05:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:30.654 02:05:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:30.654 { 00:29:30.654 "uuid": "2b31be40-9dae-4cfa-a4ee-566a8c0889da", 00:29:30.654 "name": "lvs_0", 00:29:30.654 "base_bdev": "Nvme0n1", 00:29:30.654 "total_data_clusters": 930, 00:29:30.654 "free_clusters": 930, 00:29:30.654 
"block_size": 512, 00:29:30.654 "cluster_size": 1073741824 00:29:30.654 } 00:29:30.654 ]' 00:29:30.654 02:05:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="2b31be40-9dae-4cfa-a4ee-566a8c0889da") .free_clusters' 00:29:30.654 02:05:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:29:30.654 02:05:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="2b31be40-9dae-4cfa-a4ee-566a8c0889da") .cluster_size' 00:29:30.654 02:05:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:30.654 02:05:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:29:30.654 02:05:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:29:30.654 952320 00:29:30.654 02:05:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:30.911 f363fd85-b248-40df-a96f-ad90dd9f4e2c 00:29:30.911 02:05:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:31.168 02:05:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:31.425 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
--bs=4096 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:31.680 02:05:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:31.680 02:05:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:31.936 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:31.936 fio-3.35 00:29:31.936 Starting 1 thread 00:29:31.936 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.459 00:29:34.459 test: (groupid=0, jobs=1): err= 0: pid=2374902: Fri Jul 26 02:05:16 2024 00:29:34.459 read: IOPS=6080, BW=23.8MiB/s (24.9MB/s)(47.7MiB/2008msec) 00:29:34.459 slat (usec): min=2, max=144, avg= 2.69, stdev= 2.17 00:29:34.459 clat (usec): min=874, max=171214, avg=11590.33, stdev=11559.60 00:29:34.459 lat (usec): min=878, max=171251, avg=11593.02, stdev=11559.87 00:29:34.459 clat percentiles (msec): 00:29:34.459 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:29:34.459 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:29:34.459 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:29:34.459 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 
00:29:34.459 | 99.99th=[ 171] 00:29:34.459 bw ( KiB/s): min=17096, max=26784, per=99.81%, avg=24274.00, stdev=4786.16, samples=4 00:29:34.459 iops : min= 4274, max= 6696, avg=6068.50, stdev=1196.54, samples=4 00:29:34.459 write: IOPS=6058, BW=23.7MiB/s (24.8MB/s)(47.5MiB/2008msec); 0 zone resets 00:29:34.459 slat (usec): min=2, max=101, avg= 2.80, stdev= 1.66 00:29:34.459 clat (usec): min=361, max=169271, avg=9371.29, stdev=10868.05 00:29:34.459 lat (usec): min=364, max=169277, avg=9374.09, stdev=10868.29 00:29:34.459 clat percentiles (msec): 00:29:34.459 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:29:34.459 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:34.459 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:29:34.459 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:29:34.459 | 99.99th=[ 169] 00:29:34.459 bw ( KiB/s): min=18152, max=26376, per=99.95%, avg=24220.00, stdev=4047.96, samples=4 00:29:34.459 iops : min= 4538, max= 6594, avg=6055.00, stdev=1011.99, samples=4 00:29:34.459 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:34.459 lat (msec) : 2=0.03%, 4=0.13%, 10=57.92%, 20=41.37%, 250=0.53% 00:29:34.459 cpu : usr=56.75%, sys=40.16%, ctx=121, majf=0, minf=24 00:29:34.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:34.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:34.459 issued rwts: total=12209,12165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:34.459 00:29:34.459 Run status group 0 (all jobs): 00:29:34.459 READ: bw=23.8MiB/s (24.9MB/s), 23.8MiB/s-23.8MiB/s (24.9MB/s-24.9MB/s), io=47.7MiB (50.0MB), run=2008-2008msec 00:29:34.459 WRITE: bw=23.7MiB/s (24.8MB/s), 23.7MiB/s-23.7MiB/s (24.8MB/s-24.8MB/s), io=47.5MiB (49.8MB), run=2008-2008msec 00:29:34.459 02:05:16 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:34.459 02:05:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:35.831 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=cafc9d17-d3f7-457f-930c-a225fef9d1f9 00:29:35.831 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb cafc9d17-d3f7-457f-930c-a225fef9d1f9 00:29:35.831 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=cafc9d17-d3f7-457f-930c-a225fef9d1f9 00:29:35.831 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:35.831 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:35.831 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:35.831 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:36.089 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:36.089 { 00:29:36.089 "uuid": "2b31be40-9dae-4cfa-a4ee-566a8c0889da", 00:29:36.089 "name": "lvs_0", 00:29:36.089 "base_bdev": "Nvme0n1", 00:29:36.089 "total_data_clusters": 930, 00:29:36.089 "free_clusters": 0, 00:29:36.089 "block_size": 512, 00:29:36.089 "cluster_size": 1073741824 00:29:36.089 }, 00:29:36.089 { 00:29:36.089 "uuid": "cafc9d17-d3f7-457f-930c-a225fef9d1f9", 00:29:36.089 "name": "lvs_n_0", 00:29:36.089 "base_bdev": "f363fd85-b248-40df-a96f-ad90dd9f4e2c", 00:29:36.089 "total_data_clusters": 237847, 00:29:36.089 "free_clusters": 237847, 00:29:36.089 "block_size": 512, 00:29:36.089 
"cluster_size": 4194304 00:29:36.089 } 00:29:36.089 ]' 00:29:36.089 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cafc9d17-d3f7-457f-930c-a225fef9d1f9") .free_clusters' 00:29:36.089 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:29:36.089 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cafc9d17-d3f7-457f-930c-a225fef9d1f9") .cluster_size' 00:29:36.089 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:36.089 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:29:36.089 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:29:36.089 951388 00:29:36.089 02:05:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:36.654 4c2c74a3-a9a3-4dac-a226-3a838c93652b 00:29:36.654 02:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:36.911 02:05:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:37.169 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:37.427 
02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:37.427 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:37.428 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:37.428 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.428 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:37.428 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:37.428 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:37.428 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:37.428 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:37.428 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:37.428 02:05:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:37.685 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:37.685 fio-3.35 00:29:37.685 Starting 1 thread 00:29:37.685 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.243 00:29:40.243 test: (groupid=0, jobs=1): err= 0: pid=2375639: Fri Jul 26 02:05:21 2024 00:29:40.243 read: IOPS=5791, BW=22.6MiB/s (23.7MB/s)(45.5MiB/2009msec) 00:29:40.243 slat (usec): min=2, max=134, avg= 2.84, stdev= 2.15 00:29:40.243 clat (usec): min=4470, max=20784, avg=12175.47, stdev=1351.04 00:29:40.243 lat (usec): min=4477, max=20787, avg=12178.31, stdev=1350.96 00:29:40.243 clat percentiles (usec): 00:29:40.243 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:29:40.243 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:29:40.243 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13566], 95.00th=[14091], 00:29:40.243 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19268], 99.95th=[19530], 00:29:40.243 | 
99.99th=[20841] 00:29:40.243 bw ( KiB/s): min=20840, max=24080, per=99.78%, avg=23116.00, stdev=1527.62, samples=4 00:29:40.243 iops : min= 5210, max= 6020, avg=5779.00, stdev=381.91, samples=4 00:29:40.243 write: IOPS=5772, BW=22.5MiB/s (23.6MB/s)(45.3MiB/2009msec); 0 zone resets 00:29:40.243 slat (usec): min=2, max=120, avg= 2.95, stdev= 1.74 00:29:40.243 clat (usec): min=2148, max=17551, avg=9792.09, stdev=1116.12 00:29:40.243 lat (usec): min=2154, max=17553, avg=9795.04, stdev=1116.14 00:29:40.243 clat percentiles (usec): 00:29:40.243 | 1.00th=[ 7635], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8979], 00:29:40.243 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:29:40.243 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10945], 95.00th=[11338], 00:29:40.243 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15401], 99.95th=[16450], 00:29:40.243 | 99.99th=[17433] 00:29:40.243 bw ( KiB/s): min=21848, max=23552, per=100.00%, avg=23094.00, stdev=831.21, samples=4 00:29:40.243 iops : min= 5462, max= 5888, avg=5773.50, stdev=207.80, samples=4 00:29:40.243 lat (msec) : 4=0.05%, 10=32.79%, 20=67.15%, 50=0.02% 00:29:40.243 cpu : usr=60.06%, sys=37.05%, ctx=108, majf=0, minf=24 00:29:40.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:40.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:40.243 issued rwts: total=11636,11597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.243 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:40.243 00:29:40.243 Run status group 0 (all jobs): 00:29:40.243 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.5MiB (47.7MB), run=2009-2009msec 00:29:40.243 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.3MiB (47.5MB), run=2009-2009msec 00:29:40.243 02:05:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:40.243 02:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:40.243 02:05:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:44.423 02:05:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:44.423 02:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:47.699 02:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:47.699 02:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:49.595 rmmod nvme_tcp 
00:29:49.595 rmmod nvme_fabrics 00:29:49.595 rmmod nvme_keyring 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2372933 ']' 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2372933 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2372933 ']' 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2372933 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2372933 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2372933' 00:29:49.595 killing process with pid 2372933 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2372933 00:29:49.595 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2372933 00:29:49.853 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:49.853 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:49.853 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:49.853 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:49.853 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:49.853 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.853 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.853 02:05:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:51.755 00:29:51.755 real 0m37.053s 00:29:51.755 user 2m22.495s 00:29:51.755 sys 0m6.860s 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.755 ************************************ 00:29:51.755 END TEST nvmf_fio_host 00:29:51.755 ************************************ 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.755 ************************************ 00:29:51.755 START TEST nvmf_failover 00:29:51.755 ************************************ 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:51.755 * Looking for test 
storage... 00:29:51.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.755 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:52.013 02:05:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:53.912 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.912 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:53.912 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:53.912 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:53.912 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:53.912 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:29:53.912 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:53.912 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:53.912 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:53.912 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:53.912 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.913 
02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:53.913 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:53.913 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:53.913 02:05:35 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:53.913 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:53.913 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:53.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:53.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:29:53.913 00:29:53.913 --- 10.0.0.2 ping statistics --- 00:29:53.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.913 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:29:53.913 00:29:53.913 --- 10.0.0.1 ping statistics --- 00:29:53.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.913 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:53.913 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2378997 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2378997 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2378997 ']' 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:54.173 02:05:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:54.173 [2024-07-26 02:05:35.992659] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:29:54.173 [2024-07-26 02:05:35.992736] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.173 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.173 [2024-07-26 02:05:36.057162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:54.173 [2024-07-26 02:05:36.141805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.173 [2024-07-26 02:05:36.141871] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.173 [2024-07-26 02:05:36.141895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.173 [2024-07-26 02:05:36.141906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.173 [2024-07-26 02:05:36.141916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:54.173 [2024-07-26 02:05:36.142007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.173 [2024-07-26 02:05:36.142082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:54.173 [2024-07-26 02:05:36.142086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.432 02:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.432 02:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:54.432 02:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:54.432 02:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:54.432 02:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:54.432 02:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.432 02:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:54.690 [2024-07-26 02:05:36.492868] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.690 02:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:54.948 Malloc0 00:29:54.948 02:05:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:55.206 02:05:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:55.464 02:05:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.722 [2024-07-26 02:05:37.518793] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.722 02:05:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:55.981 [2024-07-26 02:05:37.767626] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:55.981 02:05:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:56.239 [2024-07-26 02:05:38.024525] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:56.239 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2379261 00:29:56.239 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:56.239 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:56.239 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2379261 /var/tmp/bdevperf.sock 00:29:56.239 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2379261 ']' 00:29:56.239 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:56.239 02:05:38 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:56.239 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:56.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:56.239 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:56.239 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:56.498 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:56.498 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:56.498 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:57.064 NVMe0n1 00:29:57.064 02:05:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:57.322 00:29:57.322 02:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2379414 00:29:57.322 02:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:57.322 02:05:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:58.257 02:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4420 00:29:58.516 [2024-07-26 02:05:40.511888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb36840 is same with the state(5) to be set 00:29:58.775 02:05:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:02.051 02:05:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:02.051 00:30:02.051 02:05:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
00:30:02.308 [2024-07-26 02:05:44.205551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb375c0 is same with the state(5) to be set 00:30:02.308 02:05:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:05.613 02:05:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.613 [2024-07-26 02:05:47.489213] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.613 02:05:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:06.547 02:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:06.805 02:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2379414 00:30:13.367 0 00:30:13.367 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2379261 00:30:13.368 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2379261 ']' 00:30:13.368 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2379261 00:30:13.368 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:13.368 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:13.368 02:05:54 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2379261 00:30:13.368 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:13.368 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:13.368 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2379261' 00:30:13.368 killing process with pid 2379261 00:30:13.368 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2379261 00:30:13.368 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2379261 00:30:13.368 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:13.368 [2024-07-26 02:05:38.088744] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:30:13.368 [2024-07-26 02:05:38.088839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379261 ] 00:30:13.368 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.368 [2024-07-26 02:05:38.149712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.368 [2024-07-26 02:05:38.241894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.368 Running I/O for 15 seconds... 
00:30:13.368 [2024-07-26 02:05:40.513249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 
[2024-07-26 02:05:40.513819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.513970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.513984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.514001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.514017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.514030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.514055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.514094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.514111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.514126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.514141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.514155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.514170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.514183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.514198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.514211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.514226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.368 [2024-07-26 02:05:40.514240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.368 [2024-07-26 02:05:40.514255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 
[2024-07-26 02:05:40.514341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.369 [2024-07-26 02:05:40.514711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.514739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.514770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.514799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.514827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 
[2024-07-26 02:05:40.514842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.514855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.514882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.514909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.514937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.514965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.514979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.514992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.515007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.515020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.515034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.515074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.515106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.515122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.515137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.515150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.515165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.369 [2024-07-26 02:05:40.515183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.369 [2024-07-26 02:05:40.515198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.369 [2024-07-26 02:05:40.515212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.369 [2024-07-26 02:05:40.515226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.369 [2024-07-26 02:05:40.515240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.369 [2024-07-26 02:05:40.515254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.369 [2024-07-26 02:05:40.515268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.369 [2024-07-26 02:05:40.515283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.369 [2024-07-26 02:05:40.515297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.369 [2024-07-26 02:05:40.515311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.369 [2024-07-26 02:05:40.515325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.369 [2024-07-26 02:05:40.515340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.515975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.515990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.516003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.516018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.516031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.516066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.516082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.516098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.516113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.516128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.516142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.516157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.370 [2024-07-26 02:05:40.516171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.516205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.370 [2024-07-26 02:05:40.516223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79456 len:8 PRP1 0x0 PRP2 0x0
00:30:13.370 [2024-07-26 02:05:40.516237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.516262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.370 [2024-07-26 02:05:40.516280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.370 [2024-07-26 02:05:40.516292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79464 len:8 PRP1 0x0 PRP2 0x0
00:30:13.370 [2024-07-26 02:05:40.516305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.516318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.370 [2024-07-26 02:05:40.516330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.370 [2024-07-26 02:05:40.516341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 0x0
00:30:13.370 [2024-07-26 02:05:40.516374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.516388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.370 [2024-07-26 02:05:40.516399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.370 [2024-07-26 02:05:40.516425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79480 len:8 PRP1 0x0 PRP2 0x0
00:30:13.370 [2024-07-26 02:05:40.516438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.516451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.370 [2024-07-26 02:05:40.516462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.370 [2024-07-26 02:05:40.516474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79488 len:8 PRP1 0x0 PRP2 0x0
00:30:13.370 [2024-07-26 02:05:40.516486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.370 [2024-07-26 02:05:40.516499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.370 [2024-07-26 02:05:40.516510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.516522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79496 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.516535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.516548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.516559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.516571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79504 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.516584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.516596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.516607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.516619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79512 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.516632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.516645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.516656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.516667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79520 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.516681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.516694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.516710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.516722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79528 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.516734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.516748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.516759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.516773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79536 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.516787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.516800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.516811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.516823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79544 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.516836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.516849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.516859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.516870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79552 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.516883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.516895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.516906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.516917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79560 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.516929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.516942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.516953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.516964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79568 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.516976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.516989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79576 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79584 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79592 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79600 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79608 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79616 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79624 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79632 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79640 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79648 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79656 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.371 [2024-07-26 02:05:40.517582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.371 [2024-07-26 02:05:40.517594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79664 len:8 PRP1 0x0 PRP2 0x0
00:30:13.371 [2024-07-26 02:05:40.517606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.371 [2024-07-26 02:05:40.517620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.372 [2024-07-26 02:05:40.517630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.372 [2024-07-26 02:05:40.517641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79672 len:8 PRP1 0x0 PRP2 0x0
00:30:13.372 [2024-07-26 02:05:40.517654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:40.517667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.372 [2024-07-26 02:05:40.517683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.372 [2024-07-26 02:05:40.517695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79680 len:8 PRP1 0x0 PRP2 0x0
00:30:13.372 [2024-07-26 02:05:40.517708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:40.517721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.372 [2024-07-26 02:05:40.517732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.372 [2024-07-26 02:05:40.517743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79688 len:8 PRP1 0x0 PRP2 0x0
00:30:13.372 [2024-07-26 02:05:40.517756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:40.517769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.372 [2024-07-26 02:05:40.517780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.372 [2024-07-26 02:05:40.517791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79696 len:8 PRP1 0x0 PRP2 0x0
00:30:13.372 [2024-07-26 02:05:40.517803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:40.517816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.372 [2024-07-26 02:05:40.517827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.372 [2024-07-26 02:05:40.517839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0
00:30:13.372 [2024-07-26 02:05:40.517851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:40.517865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.372 [2024-07-26 02:05:40.517875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.372 [2024-07-26 02:05:40.517887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79064 len:8 PRP1 0x0 PRP2 0x0
00:30:13.372 [2024-07-26 02:05:40.517900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:40.517957] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfe7150 was disconnected and freed. reset controller.
00:30:13.372 [2024-07-26 02:05:40.517975] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:13.372 [2024-07-26 02:05:40.518030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.372 [2024-07-26 02:05:40.518050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:40.518073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.372 [2024-07-26 02:05:40.518088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:40.518102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.372 [2024-07-26 02:05:40.518115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:40.518130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.372 [2024-07-26 02:05:40.518143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:40.518156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:13.372 [2024-07-26 02:05:40.518219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff3bd0 (9): Bad file descriptor
00:30:13.372 [2024-07-26 02:05:40.521502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:13.372 [2024-07-26 02:05:40.562317] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:13.372 [2024-07-26 02:05:44.206782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.206823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.206851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.206866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.206882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.206895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.206910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.206923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.206938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.206950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.206965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.206978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.206994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.207008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.207029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.207043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.207066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.207099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.207116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.207131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.207146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.207160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.207175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.207188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.372 [2024-07-26 02:05:44.207204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.372 [2024-07-26 02:05:44.207217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.373 [2024-07-26 02:05:44.207232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.373 [2024-07-26 02:05:44.207246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.373 [2024-07-26 02:05:44.207261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.373 [2024-07-26 02:05:44.207275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.373 [2024-07-26 02:05:44.207290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.373 [2024-07-26 02:05:44.207304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.373 [2024-07-26 02:05:44.207319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.373 [2024-07-26 02:05:44.207332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.373 [2024-07-26 02:05:44.207348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.373 [2024-07-26 02:05:44.207362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.373 [2024-07-26 02:05:44.207392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.373 [2024-07-26 02:05:44.207406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.373 [2024-07-26 02:05:44.207421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.373 [2024-07-26 02:05:44.207439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0
m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207610] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:108 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.207796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.373 [2024-07-26 02:05:44.207828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.373 [2024-07-26 02:05:44.207859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.373 [2024-07-26 02:05:44.207887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.373 [2024-07-26 02:05:44.207916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.373 [2024-07-26 02:05:44.207944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:13.373 [2024-07-26 02:05:44.207959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.373 [2024-07-26 02:05:44.207972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.207988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.373 [2024-07-26 02:05:44.208001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.208029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.208066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.208113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.208143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.208173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.208202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.208235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.208265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.208294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.373 [2024-07-26 02:05:44.208323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.373 [2024-07-26 02:05:44.208353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.373 [2024-07-26 02:05:44.208368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 
[2024-07-26 02:05:44.208497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77224 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.208961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.208975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 
02:05:44.208990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209172] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.374 [2024-07-26 02:05:44.209456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.374 [2024-07-26 02:05:44.209471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.375 [2024-07-26 02:05:44.209485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.375 [2024-07-26 02:05:44.209513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.375 [2024-07-26 02:05:44.209541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.375 [2024-07-26 02:05:44.209570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.375 [2024-07-26 02:05:44.209598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.375 [2024-07-26 02:05:44.209627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.375 [2024-07-26 02:05:44.209655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.375 [2024-07-26 02:05:44.209683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.375 [2024-07-26 02:05:44.209711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.375 [2024-07-26 02:05:44.209743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.375 [2024-07-26 02:05:44.209772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.209825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77480 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.209838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.209867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.209878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77488 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 
[2024-07-26 02:05:44.209891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.209915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.209926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77496 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.209938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.209962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.209973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77504 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.209985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.209998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.210009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.210022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77512 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.210036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.210073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.210086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.210098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77520 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.210112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.210125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.210137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.210148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77528 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.210162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.210179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.210191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.210203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77536 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.210216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.210229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.210241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.210253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.210266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.210279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.210290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.210303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.210316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.210329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.210340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.210352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.210381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.210394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.210405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.210416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77568 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.210429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.210443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.210454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.210465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77576 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.210478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.210491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.210502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.210513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77584 len:8 PRP1 0x0 PRP2 0x0 00:30:13.375 [2024-07-26 02:05:44.210526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.375 [2024-07-26 02:05:44.210539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.375 [2024-07-26 02:05:44.210550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.375 [2024-07-26 02:05:44.210561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.210578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.210592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.210603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.210614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.210628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.210641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.210652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.210663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77608 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.210676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.210689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.210700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.210712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77616 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.210725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.210738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.210749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.210760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77624 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.210774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.210787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.210798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.210810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77632 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.210823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.210842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.210853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.210864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77640 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.210877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.210890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.210901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.210912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.210925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.210938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 
[2024-07-26 02:05:44.210949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.210966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77656 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.210980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.210993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.211004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.211015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77664 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.211028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.211052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.211083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77672 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.211098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.211129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.211141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:77680 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.211154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.211179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.211190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77688 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.211203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.211228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.211239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77696 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.211252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.211282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.211294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77704 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.211307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211320] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.211331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.211342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77712 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.211355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.376 [2024-07-26 02:05:44.211398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.376 [2024-07-26 02:05:44.211409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77720 len:8 PRP1 0x0 PRP2 0x0 00:30:13.376 [2024-07-26 02:05:44.211421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211478] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10179f0 was disconnected and freed. reset controller. 
00:30:13.376 [2024-07-26 02:05:44.211496] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:13.376 [2024-07-26 02:05:44.211543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.376 [2024-07-26 02:05:44.211562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.376 [2024-07-26 02:05:44.211591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.376 [2024-07-26 02:05:44.211618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.376 [2024-07-26 02:05:44.211647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:44.211660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:13.376 [2024-07-26 02:05:44.211724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff3bd0 (9): Bad file descriptor 00:30:13.376 [2024-07-26 02:05:44.214972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.376 [2024-07-26 02:05:44.378850] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:13.376 [2024-07-26 02:05:48.756655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.376 [2024-07-26 02:05:48.756717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:48.756754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.376 [2024-07-26 02:05:48.756769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:48.756784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.376 [2024-07-26 02:05:48.756798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:48.756813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.376 [2024-07-26 02:05:48.756827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.376 [2024-07-26 02:05:48.756841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30304 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.756855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.756880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.756894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.756909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.756923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.756938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.756952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.756967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.756981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.756995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757023] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:13.377 [2024-07-26 02:05:48.757406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.377 [2024-07-26 02:05:48.757810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.377 [2024-07-26 02:05:48.757823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.757838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.757851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.757865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:13.378 [2024-07-26 02:05:48.757878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.757893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.757906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.757920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.757933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.757948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.757960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.757975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.757992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 
[2024-07-26 02:05:48.758443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.378 [2024-07-26 02:05:48.758716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.378 [2024-07-26 02:05:48.758744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.378 [2024-07-26 02:05:48.758771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.378 [2024-07-26 02:05:48.758802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.378 [2024-07-26 02:05:48.758830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.378 [2024-07-26 02:05:48.758858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.378 [2024-07-26 02:05:48.758885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.378 
[2024-07-26 02:05:48.758915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.378 [2024-07-26 02:05:48.758930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.758943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.758958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.758972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.758986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.758999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759097] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 
02:05:48.759779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759943] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.759972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.759988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.760001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.760016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.760030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.760045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.760081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.379 [2024-07-26 02:05:48.760101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.379 [2024-07-26 02:05:48.760116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:13.380 [2024-07-26 02:05:48.760146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.380 [2024-07-26 02:05:48.760176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31200 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31208 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31216 len:8 PRP1 0x0 PRP2 0x0 
00:30:13.380 [2024-07-26 02:05:48.760356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31224 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31232 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31240 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760527] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31248 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31256 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31264 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760690] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31272 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31280 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31288 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30800 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.380 [2024-07-26 02:05:48.760874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.380 [2024-07-26 02:05:48.760885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30808 len:8 PRP1 0x0 PRP2 0x0 00:30:13.380 [2024-07-26 02:05:48.760897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.760956] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10176b0 was disconnected and freed. reset controller. 00:30:13.380 [2024-07-26 02:05:48.760974] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:13.380 [2024-07-26 02:05:48.761022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.380 [2024-07-26 02:05:48.761041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.761057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.380 [2024-07-26 02:05:48.761079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.761094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.380 [2024-07-26 02:05:48.761107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 
02:05:48.761121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.380 [2024-07-26 02:05:48.761135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.380 [2024-07-26 02:05:48.761148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.380 [2024-07-26 02:05:48.761187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff3bd0 (9): Bad file descriptor 00:30:13.380 [2024-07-26 02:05:48.764464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.380 [2024-07-26 02:05:48.795032] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:13.380 00:30:13.380 Latency(us) 00:30:13.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.380 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:13.380 Verification LBA range: start 0x0 length 0x4000 00:30:13.380 NVMe0n1 : 15.01 8417.96 32.88 606.38 0.00 14156.67 813.13 15922.82 00:30:13.380 =================================================================================================================== 00:30:13.380 Total : 8417.96 32.88 606.38 0.00 14156.67 813.13 15922.82 00:30:13.380 Received shutdown signal, test time was about 15.000000 seconds 00:30:13.380 00:30:13.380 Latency(us) 00:30:13.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.380 =================================================================================================================== 00:30:13.380 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:13.380 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:13.380 02:05:54 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:13.380 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:13.380 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2381143 00:30:13.380 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:13.380 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2381143 /var/tmp/bdevperf.sock 00:30:13.380 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2381143 ']' 00:30:13.381 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:13.381 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:13.381 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:13.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:13.381 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:13.381 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:13.381 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:13.381 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:30:13.381 02:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:13.381 [2024-07-26 02:05:55.184018] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:13.381 02:05:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:13.638 [2024-07-26 02:05:55.420674] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:13.638 02:05:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:13.895 NVMe0n1 00:30:13.895 02:05:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:14.152 00:30:14.152 02:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:30:14.716 00:30:14.716 02:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:14.716 02:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:14.974 02:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:14.974 02:05:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:18.251 02:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:18.251 02:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:18.251 02:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2381835 00:30:18.251 02:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:18.251 02:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2381835 00:30:19.620 0 00:30:19.620 02:06:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:19.620 [2024-07-26 02:05:54.701279] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:30:19.620 [2024-07-26 02:05:54.701388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381143 ] 00:30:19.620 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.620 [2024-07-26 02:05:54.762267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.620 [2024-07-26 02:05:54.844925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.620 [2024-07-26 02:05:56.965354] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:19.620 [2024-07-26 02:05:56.965439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.620 [2024-07-26 02:05:56.965463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.620 [2024-07-26 02:05:56.965480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.621 [2024-07-26 02:05:56.965509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.621 [2024-07-26 02:05:56.965524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.621 [2024-07-26 02:05:56.965538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.621 [2024-07-26 02:05:56.965553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.621 [2024-07-26 02:05:56.965566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.621 [2024-07-26 02:05:56.965581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.621 [2024-07-26 02:05:56.965625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.621 [2024-07-26 02:05:56.965657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd6bd0 (9): Bad file descriptor 00:30:19.621 [2024-07-26 02:05:56.973112] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:19.621 Running I/O for 1 seconds... 00:30:19.621 00:30:19.621 Latency(us) 00:30:19.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.621 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:19.621 Verification LBA range: start 0x0 length 0x4000 00:30:19.621 NVMe0n1 : 1.01 7809.17 30.50 0.00 0.00 16304.59 2633.58 18447.17 00:30:19.621 =================================================================================================================== 00:30:19.621 Total : 7809.17 30.50 0.00 0.00 16304.59 2633.58 18447.17 00:30:19.621 02:06:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:19.621 02:06:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:19.621 02:06:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:20.185 02:06:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:20.185 02:06:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:20.185 02:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:20.442 02:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2381143 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2381143 ']' 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2381143 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2381143 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2381143' 00:30:23.719 killing process with pid 2381143 00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2381143 
00:30:23.719 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2381143 00:30:23.978 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:23.978 02:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.235 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:24.235 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:24.235 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:24.235 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:24.235 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:24.235 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:24.235 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:24.235 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:24.235 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:24.235 rmmod nvme_tcp 00:30:24.235 rmmod nvme_fabrics 00:30:24.235 rmmod nvme_keyring 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2378997 ']' 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2378997 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@950 -- # '[' -z 2378997 ']' 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2378997 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2378997 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2378997' 00:30:24.492 killing process with pid 2378997 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2378997 00:30:24.492 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2378997 00:30:24.749 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:24.749 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:24.749 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:24.749 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:24.749 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:24.749 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.749 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.749 02:06:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:30:26.647 02:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:26.647 00:30:26.647 real 0m34.854s 00:30:26.647 user 2m2.585s 00:30:26.647 sys 0m5.945s 00:30:26.647 02:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:26.647 02:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:26.647 ************************************ 00:30:26.647 END TEST nvmf_failover 00:30:26.647 ************************************ 00:30:26.647 02:06:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:26.647 02:06:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:26.647 02:06:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.648 ************************************ 00:30:26.648 START TEST nvmf_host_discovery 00:30:26.648 ************************************ 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:26.648 * Looking for test storage... 
00:30:26.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- 
# NVME_CONNECT='nvme connect' 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.648 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.906 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.906 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.906 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.906 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.906 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.906 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:26.906 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.906 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:26.906 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:26.906 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:26.907 02:06:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.809 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:28.810 
02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.810 02:06:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:28.810 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:28.810 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:28.810 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:28.810 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:28.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:30:28.810 00:30:28.810 --- 10.0.0.2 ping statistics --- 00:30:28.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.810 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:28.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:30:28.810 00:30:28.810 --- 10.0.0.1 ping statistics --- 00:30:28.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.810 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2385015 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2385015 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2385015 ']' 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:28.810 02:06:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.810 [2024-07-26 02:06:10.734831] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:30:28.810 [2024-07-26 02:06:10.734902] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.810 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.810 [2024-07-26 02:06:10.805591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.068 [2024-07-26 02:06:10.900887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.068 [2024-07-26 02:06:10.900963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:29.068 [2024-07-26 02:06:10.900991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.068 [2024-07-26 02:06:10.901005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.068 [2024-07-26 02:06:10.901017] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:29.068 [2024-07-26 02:06:10.901065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.068 [2024-07-26 02:06:11.035158] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
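The target-side setup traced above (nvmf/common.sh's `nvmf_tcp_init` plus the first discovery.sh RPCs) reduces to a short command sequence: move the target NIC into a network namespace, address both ends, open port 4420, then start `nvmf_tgt` in the namespace and create the TCP transport and discovery listener. A minimal dry-run sketch of that sequence, with interface names, addresses, and RPC arguments taken from the log; the `run` wrapper is a hypothetical helper that only prints each command, since the real flow needs root and a running SPDK build:

```shell
#!/usr/bin/env bash
# Dry-run sketch: `run` prints each command instead of executing it,
# so the sequence can be inspected without root or SPDK installed.
run() { printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk            # target network namespace (name from the log)
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 # e810 net devices found under 0000:0a:00.0/.1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

setup_tcp_target() {
  run ip netns add "$NS"
  run ip link set "$TGT_IF" netns "$NS"        # target NIC lives in the netns
  run ip addr add "$INI_IP/24" dev "$INI_IF"   # initiator side stays in root ns
  run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
  run ip link set "$INI_IF" up
  run ip netns exec "$NS" ip link set "$TGT_IF" up
  run ip netns exec "$NS" ip link set lo up
  run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  # Target app runs inside the namespace; RPCs then create the transport
  # and the discovery listener exactly as host/discovery.sh does.
  run ip netns exec "$NS" nvmf_tgt -m 0x2
  run rpc.py nvmf_create_transport -t tcp -o -u 8192
  run rpc.py nvmf_subsystem_add_listener \
      nqn.2014-08.org.nvmexpress.discovery -t tcp -a "$TGT_IP" -s 8009
}

SETUP_OUT=$(setup_tcp_target)
printf '%s\n' "$SETUP_OUT"
```

The cross-namespace pings in the log (10.0.0.1 ↔ 10.0.0.2) are the sanity check that this plumbing worked before any NVMe/TCP traffic is attempted.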
00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.068 [2024-07-26 02:06:11.043385] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.068 null0 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.068 null1 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2385160 00:30:29.068 
02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2385160 /tmp/host.sock 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2385160 ']' 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:29.068 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:29.068 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.328 [2024-07-26 02:06:11.115731] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:30:29.328 [2024-07-26 02:06:11.115812] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2385160 ] 00:30:29.328 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.328 [2024-07-26 02:06:11.177482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.328 [2024-07-26 02:06:11.267968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
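The host side mirrors the target: a second `nvmf_tgt` process is started on its own RPC socket (`/tmp/host.sock`, pid 2385160 in this run) and told to run discovery against the target's 8009 listener. A dry-run sketch of that step, using the same print-only `run` helper as a stand-in for actual execution; the arguments are the ones visible in the log:

```shell
# Dry-run sketch of the host-side discovery setup from host/discovery.sh.
run() { printf '+ %s\n' "$*"; }

HOST_SOCK=/tmp/host.sock   # separate RPC socket so host and target apps coexist

start_host_discovery() {
  run nvmf_tgt -m 0x1 -r "$HOST_SOCK"              # host app on core 0
  run rpc.py -s "$HOST_SOCK" log_set_flag bdev_nvme # enable bdev_nvme logging
  run rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
}

HOST_OUT=$(start_host_discovery)
printf '%s\n' "$HOST_OUT"
```

Once `bdev_nvme_start_discovery` is running, the discovery service attaches controllers automatically as subsystems appear on the target, which is what the later `discovery_attach_cb`/`discovery_log_page_cb` messages show.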
00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.617 02:06:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:29.617 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.875 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
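The repeated `get_subsystem_names` / `get_bdev_list` traces above all follow one pattern: an RPC returns a JSON array of objects, `jq -r '.[].name'` extracts the names, `sort` makes the ordering deterministic, and `xargs` flattens the result to a single comparable line (empty so far, since no controller is attached yet). A minimal sketch of that pipeline against a mocked RPC response (`mock_controllers` is stand-in data, not real RPC output; requires `jq`):

```shell
# Mocked output of `rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers`:
# a JSON array of {"name": ...} objects, deliberately out of order.
mock_controllers='[{"name":"nvme1"},{"name":"nvme0"}]'

get_subsystem_names() {
  # Extract names, sort for a stable comparison, flatten to one line.
  echo "$mock_controllers" | jq -r '.[].name' | sort | xargs
}

get_subsystem_names   # prints: nvme0 nvme1
```

The single-line result is what makes `[[ "$(get_subsystem_names)" == "nvme0" ]]`-style checks in the test cheap and order-independent.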
00:30:29.875 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:29.875 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:29.875 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:29.875 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.875 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:29.875 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.875 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.876 [2024-07-26 02:06:11.673014] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:29.876 02:06:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:29.876 02:06:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:30:29.876 02:06:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:30:30.442 [2024-07-26 02:06:12.412661] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:30.442 [2024-07-26 02:06:12.412699] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:30.442 [2024-07-26 02:06:12.412725] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:30.700 [2024-07-26 02:06:12.498986] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:30.700 [2024-07-26 02:06:12.564642] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:30.700 [2024-07-26 02:06:12.564664] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # xargs 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
[[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:30.958 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.216 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:30:31.216 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # return 0 00:30:31.216 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.217 02:06:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:31.217 
02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:31.217 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:31.477 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.478 [2024-07-26 02:06:13.357866] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:31.478 [2024-07-26 02:06:13.358257] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:31.478 [2024-07-26 02:06:13.358291] 
bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:31.478 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.479 [2024-07-26 02:06:13.444996] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:31.479 02:06:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:31.479 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:31.480 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:31.480 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:31.480 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:31.480 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:31.480 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.480 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.480 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:31.480 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:31.480 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.739 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:31.739 02:06:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:30:31.739 [2024-07-26 02:06:13.547731] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:31.739 [2024-07-26 02:06:13.547758] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:31.739 [2024-07-26 02:06:13.547770] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # return 0 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.668 [2024-07-26 02:06:14.586183] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:32.668 [2024-07-26 02:06:14.586215] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:32.668 [2024-07-26 02:06:14.586275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.668 [2024-07-26 02:06:14.586304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.668 [2024-07-26 02:06:14.586329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.668 [2024-07-26 02:06:14.586344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.668 [2024-07-26 02:06:14.586365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.668 [2024-07-26 02:06:14.586401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.668 [2024-07-26 02:06:14.586417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.668 [2024-07-26 02:06:14.586432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.668 [2024-07-26 02:06:14.586447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:32.668 02:06:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:32.668 [2024-07-26 02:06:14.596278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.668 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.668 [2024-07-26 02:06:14.606325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:32.668 [2024-07-26 02:06:14.606680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.668 [2024-07-26 02:06:14.606711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfee660 with addr=10.0.0.2, port=4420 00:30:32.668 [2024-07-26 02:06:14.606728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.668 [2024-07-26 02:06:14.606752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.668 [2024-07-26 02:06:14.606774] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.668 [2024-07-26 02:06:14.606789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:32.668 [2024-07-26 02:06:14.606805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:30:32.668 [2024-07-26 02:06:14.606839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:32.668 [2024-07-26 02:06:14.616440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:32.668 [2024-07-26 02:06:14.616719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.668 [2024-07-26 02:06:14.616747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfee660 with addr=10.0.0.2, port=4420 00:30:32.668 [2024-07-26 02:06:14.616769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.668 [2024-07-26 02:06:14.616793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.669 [2024-07-26 02:06:14.616814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.669 [2024-07-26 02:06:14.616828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:32.669 [2024-07-26 02:06:14.616843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.669 [2024-07-26 02:06:14.616862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:32.669 [2024-07-26 02:06:14.626528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:32.669 [2024-07-26 02:06:14.626697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.669 [2024-07-26 02:06:14.626728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfee660 with addr=10.0.0.2, port=4420 00:30:32.669 [2024-07-26 02:06:14.626746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.669 [2024-07-26 02:06:14.626768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.669 [2024-07-26 02:06:14.626789] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.669 [2024-07-26 02:06:14.626803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:32.669 [2024-07-26 02:06:14.626817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.669 [2024-07-26 02:06:14.626835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:32.669 [2024-07-26 02:06:14.636600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:32.669 [2024-07-26 02:06:14.636875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.669 [2024-07-26 02:06:14.636905] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfee660 with addr=10.0.0.2, port=4420 00:30:32.669 [2024-07-26 02:06:14.636922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.669 [2024-07-26 02:06:14.636962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.669 [2024-07-26 02:06:14.636986] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.669 [2024-07-26 02:06:14.637000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:32.669 [2024-07-26 02:06:14.637014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.669 [2024-07-26 02:06:14.637033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:32.669 [2024-07-26 02:06:14.646675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:32.669 [2024-07-26 02:06:14.646912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.669 [2024-07-26 02:06:14.646941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfee660 with addr=10.0.0.2, port=4420 00:30:32.669 [2024-07-26 02:06:14.646957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.669 [2024-07-26 02:06:14.646980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.669 [2024-07-26 02:06:14.647012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.669 [2024-07-26 02:06:14.647030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:32.669 [2024-07-26 02:06:14.647055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.669 [2024-07-26 02:06:14.647086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:32.669 [2024-07-26 02:06:14.656750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:32.669 [2024-07-26 02:06:14.656940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.669 [2024-07-26 02:06:14.656968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfee660 with addr=10.0.0.2, port=4420 00:30:32.669 [2024-07-26 02:06:14.656985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.669 [2024-07-26 02:06:14.657008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.669 [2024-07-26 02:06:14.657028] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.669 [2024-07-26 02:06:14.657053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:32.669 [2024-07-26 02:06:14.657076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.669 [2024-07-26 02:06:14.657097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.669 [2024-07-26 02:06:14.666822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:32.669 [2024-07-26 02:06:14.667031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.669 [2024-07-26 02:06:14.667069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfee660 with addr=10.0.0.2, port=4420 00:30:32.669 [2024-07-26 02:06:14.667088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.669 [2024-07-26 02:06:14.667110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.669 [2024-07-26 02:06:14.667144] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.669 [2024-07-26 02:06:14.667167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:32.669 [2024-07-26 02:06:14.667181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.669 [2024-07-26 02:06:14.667201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:32.669 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:32.669 [2024-07-26 02:06:14.676893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:32.669 [2024-07-26 02:06:14.677147] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.669 [2024-07-26 02:06:14.677177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfee660 with addr=10.0.0.2, port=4420 00:30:32.669 [2024-07-26 02:06:14.677195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.669 [2024-07-26 02:06:14.677218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.669 [2024-07-26 02:06:14.677239] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.669 [2024-07-26 02:06:14.677254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:32.669 [2024-07-26 02:06:14.677267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.669 [2024-07-26 02:06:14.677286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:32.926 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.926 [2024-07-26 02:06:14.686969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:32.926 [2024-07-26 02:06:14.687154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.926 [2024-07-26 02:06:14.687183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfee660 with addr=10.0.0.2, port=4420 00:30:32.926 [2024-07-26 02:06:14.687200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.926 [2024-07-26 02:06:14.687223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.926 [2024-07-26 02:06:14.687262] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.926 [2024-07-26 02:06:14.687280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:32.926 [2024-07-26 02:06:14.687295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.926 [2024-07-26 02:06:14.687314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:32.926 [2024-07-26 02:06:14.697053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:32.926 [2024-07-26 02:06:14.697234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.926 [2024-07-26 02:06:14.697262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfee660 with addr=10.0.0.2, port=4420 00:30:32.926 [2024-07-26 02:06:14.697278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.926 [2024-07-26 02:06:14.697300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.926 [2024-07-26 02:06:14.697321] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.926 [2024-07-26 02:06:14.697336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:32.926 [2024-07-26 02:06:14.697357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.926 [2024-07-26 02:06:14.697376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:32.926 [2024-07-26 02:06:14.707131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:32.926 [2024-07-26 02:06:14.707350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.926 [2024-07-26 02:06:14.707381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfee660 with addr=10.0.0.2, port=4420 00:30:32.926 [2024-07-26 02:06:14.707397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee660 is same with the state(5) to be set 00:30:32.926 [2024-07-26 02:06:14.707419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee660 (9): Bad file descriptor 00:30:32.926 [2024-07-26 02:06:14.707452] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.926 [2024-07-26 02:06:14.707470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:32.926 [2024-07-26 02:06:14.707484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.926 [2024-07-26 02:06:14.707503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:32.926 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:30:32.926 02:06:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:30:32.926 [2024-07-26 02:06:14.712933] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:32.926 [2024-07-26 02:06:14.712965] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:33.858 02:06:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:33.858 
02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:33.858 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:30:34.115 02:06:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.115 02:06:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.047 [2024-07-26 02:06:17.004771] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:35.047 [2024-07-26 02:06:17.004811] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:35.047 [2024-07-26 02:06:17.004835] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:35.305 [2024-07-26 02:06:17.092082] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:35.305 [2024-07-26 02:06:17.160146] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:35.305 [2024-07-26 02:06:17.160187] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:30:35.305 request: 00:30:35.305 { 00:30:35.305 "name": "nvme", 00:30:35.305 "trtype": "tcp", 00:30:35.305 "traddr": "10.0.0.2", 00:30:35.305 "adrfam": "ipv4", 00:30:35.305 "trsvcid": "8009", 00:30:35.305 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:35.305 "wait_for_attach": true, 00:30:35.305 "method": "bdev_nvme_start_discovery", 00:30:35.305 "req_id": 1 00:30:35.305 } 00:30:35.305 Got JSON-RPC error response 00:30:35.305 response: 00:30:35.305 { 00:30:35.305 "code": -17, 00:30:35.305 "message": "File exists" 00:30:35.305 } 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.305 02:06:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.305 request: 00:30:35.305 { 00:30:35.305 "name": "nvme_second", 00:30:35.305 "trtype": "tcp", 00:30:35.305 "traddr": "10.0.0.2", 00:30:35.305 "adrfam": "ipv4", 00:30:35.305 "trsvcid": "8009", 00:30:35.305 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:35.305 "wait_for_attach": true, 00:30:35.305 "method": "bdev_nvme_start_discovery", 00:30:35.305 "req_id": 1 00:30:35.305 } 00:30:35.305 Got JSON-RPC error response 00:30:35.305 response: 00:30:35.305 { 00:30:35.305 "code": -17, 00:30:35.305 "message": "File exists" 00:30:35.305 } 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:35.305 
02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:35.305 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:35.563 02:06:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.563 02:06:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:36.495 [2024-07-26 02:06:18.376052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.495 [2024-07-26 02:06:18.376109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfea3f0 with addr=10.0.0.2, port=8010 00:30:36.495 [2024-07-26 02:06:18.376141] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:36.495 [2024-07-26 02:06:18.376157] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:36.495 [2024-07-26 02:06:18.376170] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:37.428 [2024-07-26 02:06:19.378553] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.428 [2024-07-26 02:06:19.378629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfea3f0 with addr=10.0.0.2, port=8010 00:30:37.428 [2024-07-26 02:06:19.378664] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:37.428 [2024-07-26 02:06:19.378696] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:37.428 [2024-07-26 02:06:19.378709] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:38.804 [2024-07-26 02:06:20.380651] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:38.804 request: 00:30:38.804 { 00:30:38.804 "name": "nvme_second", 00:30:38.804 "trtype": "tcp", 00:30:38.804 "traddr": "10.0.0.2", 00:30:38.804 "adrfam": "ipv4", 00:30:38.804 "trsvcid": "8010", 00:30:38.804 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:38.804 "wait_for_attach": false, 00:30:38.804 "attach_timeout_ms": 3000, 00:30:38.804 "method": "bdev_nvme_start_discovery", 00:30:38.804 "req_id": 1 00:30:38.804 } 00:30:38.804 Got JSON-RPC error response 00:30:38.804 response: 00:30:38.804 { 00:30:38.804 "code": -110, 00:30:38.804 "message": "Connection timed out" 00:30:38.804 } 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2385160 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:38.804 rmmod nvme_tcp 00:30:38.804 rmmod nvme_fabrics 00:30:38.804 rmmod nvme_keyring 00:30:38.804 02:06:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2385015 ']' 00:30:38.804 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2385015 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2385015 ']' 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2385015 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2385015 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2385015' 00:30:38.805 killing process with pid 2385015 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2385015 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2385015 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:38.805 02:06:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.805 02:06:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:41.339 00:30:41.339 real 0m14.174s 00:30:41.339 user 0m21.122s 00:30:41.339 sys 0m2.935s 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:41.339 ************************************ 00:30:41.339 END TEST nvmf_host_discovery 00:30:41.339 ************************************ 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:41.339 ************************************ 00:30:41.339 START TEST nvmf_host_multipath_status 00:30:41.339 ************************************ 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:41.339 * Looking for test storage... 00:30:41.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 
-- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:41.339 02:06:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:41.339 02:06:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:43.237 
02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:43.237 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:43.237 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:43.237 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:43.237 02:06:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:43.237 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.237 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:43.238 02:06:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:43.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:43.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:30:43.238 00:30:43.238 --- 10.0.0.2 ping statistics --- 00:30:43.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.238 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:43.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:30:43.238 00:30:43.238 --- 10.0.0.1 ping statistics --- 00:30:43.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.238 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:43.238 02:06:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2388325 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2388325 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2388325 ']' 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:43.238 02:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:43.238 [2024-07-26 02:06:25.023150] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:30:43.238 [2024-07-26 02:06:25.023236] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:43.238 EAL: No free 2048 kB hugepages reported on node 1
00:30:43.238 [2024-07-26 02:06:25.086250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:43.238 [2024-07-26 02:06:25.171005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:43.238 [2024-07-26 02:06:25.171076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:43.238 [2024-07-26 02:06:25.171109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:43.238 [2024-07-26 02:06:25.171120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:43.238 [2024-07-26 02:06:25.171130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:43.238 [2024-07-26 02:06:25.171183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:43.238 [2024-07-26 02:06:25.171187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:30:43.496 02:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:43.496 02:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:30:43.496 02:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:43.496 02:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:43.496 02:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:30:43.496 02:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:43.496 02:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2388325
00:30:43.496 02:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:30:43.753 [2024-07-26 02:06:25.586521] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:43.753 02:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:30:44.011 Malloc0
00:30:44.011 02:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:30:44.267 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:44.524 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:44.781 [2024-07-26 02:06:26.625844] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:44.781 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:45.039 [2024-07-26 02:06:26.882636] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:45.039 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2388605
00:30:45.039 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:30:45.039 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:30:45.039 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2388605 /var/tmp/bdevperf.sock
00:30:45.039 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2388605 ']'
00:30:45.039 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:45.039 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:45.039 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:45.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:45.039 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:45.039 02:06:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:30:45.297 02:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:45.297 02:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:30:45.297 02:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:30:45.554 02:06:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:30:46.118 Nvme0n1
00:30:46.118 02:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:30:46.375 Nvme0n1
00:30:46.375 02:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:30:46.375 02:06:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
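The status checks that follow repeatedly poll `bdev_nvme_get_io_paths` over the bdevperf RPC socket and filter the result with jq. A minimal Python sketch of that filter logic, assuming a JSON shape inferred from the jq expression in `multipath_status.sh` (not copied from actual SPDK output), may make the repeated `port_status` steps easier to read:

```python
import json

# Sample payload shaped like the jq filter implies:
# .poll_groups[].io_paths[] with .transport.trsvcid and boolean fields.
sample = json.loads("""
{"poll_groups": [{"io_paths": [
    {"transport": {"trsvcid": "4420"}, "current": true,  "connected": true, "accessible": true},
    {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": true}
]}]}
""")

def port_status(data, trsvcid, field):
    # Equivalent of:
    #   jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="<port>").<field>'
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]

assert port_status(sample, "4420", "current") is True
assert port_status(sample, "4421", "current") is False
```

The shell helper then compares the extracted value against the expected literal, exactly as the `[[ true == \t\r\u\e ]]` lines in the trace do.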
00:30:48.901 02:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:30:48.901 02:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:30:48.901 02:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:30:49.159 02:06:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:30:50.122 02:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:30:50.122 02:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:30:50.122 02:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:50.122 02:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:50.380 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:50.380 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:30:50.380 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:50.380 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:50.638 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:50.638 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:50.638 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:50.638 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:50.897 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:50.897 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:50.897 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:50.897 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:51.155 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:51.155 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:51.155 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:51.155 02:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:51.413 02:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:51.413 02:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:51.413 02:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:51.413 02:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:51.671 02:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:51.671 02:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:30:51.671 02:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:30:51.928 02:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:30:52.185 02:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:30:53.116 02:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:30:53.116 02:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:30:53.116 02:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:53.116 02:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:53.373 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:53.373 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:30:53.373 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:53.373 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:53.630 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:53.630 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:53.630 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:53.630 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:53.887 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:53.887 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:53.887 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:53.887 02:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:54.144 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:54.144 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:54.144 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:54.144 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:54.401 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:54.401 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:54.401 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:54.401 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:54.658 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:54.658 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:30:54.658 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:30:54.916 02:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:30:55.174 02:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:30:56.106 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:30:56.106 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:30:56.106 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:56.106 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:56.364 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:56.364 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:30:56.364 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:56.364 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:56.622 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:56.622 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:56.622 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:56.622 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:56.879 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:56.879 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:56.879 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:56.879 02:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:30:57.138 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:57.138 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:30:57.138 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:57.138 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:30:57.396 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:57.396 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:30:57.396 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:57.396 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:30:57.654 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:57.654 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:30:57.654 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:30:57.912 02:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:30:58.169 02:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:30:59.101 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:30:59.101 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:30:59.101 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:59.101 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:30:59.359 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:59.359 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:30:59.359 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:59.359 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:30:59.617 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:30:59.617 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:30:59.617 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:59.617 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:30:59.874 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:30:59.874 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:30:59.874 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:30:59.874 02:06:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:00.131 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:00.131 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:00.131 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:00.131 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:00.389 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:00.389 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:00.389 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:00.389 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:00.647 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:00.647 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:31:00.647 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:31:00.905 02:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:31:01.163 02:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:31:02.532 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:31:02.532 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:02.532 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:02.532 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:02.532 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:02.532 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:02.532 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:02.532 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:02.789 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:02.789 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:02.789 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:02.789 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:03.047 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:03.047 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:03.047 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:03.047 02:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:03.304 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:03.305 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:31:03.305 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:03.305 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:03.563 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:03.563 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:03.563 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:03.563 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:03.821 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:03.821 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:31:03.821 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:31:04.078 02:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:04.078 02:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:31:05.451 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:31:05.451 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:05.451 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:05.451 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:05.451 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:05.451 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:05.451 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:05.451 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:05.709 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:05.709 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:05.709 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:05.709 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:05.967 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:05.967 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:05.967 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:05.967 02:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:06.225 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:06.225 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:31:06.225 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:06.225 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:06.482 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:06.482 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:06.482 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:06.482 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:06.743 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:06.743 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:31:07.037 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:31:07.037 02:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:31:07.295 02:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:07.553 02:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:31:08.487 02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:31:08.487 02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:08.487 02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:08.487 02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:08.745 02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:08.745 02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:08.745
02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.745 02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:09.003 02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.003 02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:09.003 02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.003 02:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:09.261 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.261 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:09.261 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.261 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:09.519 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.519 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:09.519 
02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.519 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:09.777 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.777 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:09.777 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.777 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:10.035 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.035 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:10.035 02:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:10.293 02:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:10.551 02:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
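The `port_status` checks repeated throughout this log all follow one pattern: query `bdev_nvme_get_io_paths` over the bdevperf RPC socket, select the io_path whose `transport.trsvcid` matches the port under test with jq, and compare one flag (`current`, `connected`, or `accessible`) against the expected value. A minimal Python sketch of that selection logic follows; the sample payload is a hypothetical illustration of the RPC's output shape, not captured from this run.

```python
import json

# Hypothetical example of bdev_nvme_get_io_paths output (shape only,
# not from this run): two TCP paths on ports 4420 and 4421.
sample_rpc_output = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"},
         "current": false, "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"},
         "current": true, "connected": true, "accessible": true}
      ]
    }
  ]
}
""")

def port_status(payload, port, field):
    """Mirror of the jq filter used above:
    .poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD"""
    for group in payload["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    raise KeyError(f"no io_path on port {port}")

print(port_status(sample_rpc_output, "4420", "current"))    # False
print(port_status(sample_rpc_output, "4421", "connected"))  # True
```

The shell script then string-compares the jq output against the expected literal (`[[ true == \t\r\u\e ]]`), which is what each `port_status 4420 current true`-style line in the log is asserting.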
00:31:11.495 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:11.495 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:11.495 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.495 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:11.756 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:11.756 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:11.756 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.756 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:12.013 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.013 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:12.013 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.013 02:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:31:12.270 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.270 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:12.270 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.270 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:12.528 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.528 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:12.528 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.528 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:12.786 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.786 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:12.786 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.786 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:31:13.044 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.044 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:13.044 02:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:13.302 02:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:13.560 02:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:14.492 02:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:14.492 02:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:14.492 02:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.492 02:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:14.750 02:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.750 02:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:14.750 02:06:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.750 02:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:15.006 02:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.006 02:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:15.006 02:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.006 02:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:15.265 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.265 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:15.265 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.265 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:15.521 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.521 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:15.521 02:06:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.521 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:15.777 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.777 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:15.777 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.777 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:16.034 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.034 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:16.034 02:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:16.291 02:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:16.547 02:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
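Each `set_ANA_state` call above is followed, after a one-second settle, by a `check_status` with a specific tuple of expected flags. The mapping those expectations imply under the `active_active` multipath policy set earlier can be sketched as below; this is an inference from the expected values logged in this run, not a restatement of SPDK source.

```python
# Expected per-path flags as a function of the two listeners' ANA states,
# inferred from the check_status expectations in this log (active_active
# policy). "connected" stays true throughout, since an ANA-state change
# does not tear down the TCP connections themselves.
def expected_status(ana_4420, ana_4421):
    states = {"4420": ana_4420, "4421": ana_4421}
    accessible = {p: s != "inaccessible" for p, s in states.items()}
    # Under active_active, every accessible path in the best available ANA
    # class (optimized beats non_optimized) carries I/O ("current").
    best = "optimized" if "optimized" in states.values() else "non_optimized"
    current = {p: accessible[p] and states[p] == best for p in states}
    connected = {p: True for p in states}
    return current, connected, accessible

# Reproduces the four transitions exercised in this log:
assert expected_status("optimized", "optimized")[0] == {"4420": True, "4421": True}
assert expected_status("non_optimized", "optimized")[0] == {"4420": False, "4421": True}
assert expected_status("non_optimized", "non_optimized")[0] == {"4420": True, "4421": True}
cur, _, acc = expected_status("non_optimized", "inaccessible")
assert cur == {"4420": True, "4421": False} and acc["4421"] is False
```

The last case is the `check_status true false true true true false` asserted next in the log: with 4421 inaccessible, 4420 remains the sole current path even though it is only non_optimized.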
00:31:17.477 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:17.478 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:17.478 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.478 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:17.735 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.735 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:17.735 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.735 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:17.992 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:17.992 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:17.992 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.992 02:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:31:18.249 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.249 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:18.249 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.249 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:18.506 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.506 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:18.506 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:18.506 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.764 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.764 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:18.764 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.764 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:31:19.019 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:19.019 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2388605 00:31:19.019 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2388605 ']' 00:31:19.019 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2388605 00:31:19.019 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:31:19.019 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:19.019 02:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2388605 00:31:19.019 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:31:19.019 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:31:19.019 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2388605' 00:31:19.019 killing process with pid 2388605 00:31:19.019 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2388605 00:31:19.019 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2388605 00:31:19.278 Connection closed with partial response: 00:31:19.278 00:31:19.278 00:31:19.278 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2388605 00:31:19.278 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:31:19.279 [2024-07-26 02:06:26.942248] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:31:19.279 [2024-07-26 02:06:26.942333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388605 ] 00:31:19.279 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.279 [2024-07-26 02:06:27.017802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.279 [2024-07-26 02:06:27.116092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:19.279 Running I/O for 90 seconds... 00:31:19.279 [2024-07-26 02:06:42.832269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:19.279 [2024-07-26 02:06:42.832523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:19.279 
[2024-07-26 02:06:42.832818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.832981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.832996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.833018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 
02:06:42.833033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.833080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.833098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.833981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.834003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.834046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.834111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.834150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834173] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.834189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.279 [2024-07-26 02:06:42.834227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:19.279 [2024-07-26 02:06:42.834728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.279 [2024-07-26 02:06:42.834747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.834773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.834789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.834817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.834833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.834857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.834872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.834895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.280 [2024-07-26 02:06:42.834910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.834933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.834948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.834971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.834986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.835983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.835998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.836021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.836036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.836080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.836099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.836124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.836140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.836164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.836180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.836204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.836220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.836244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.280 [2024-07-26 02:06:42.836260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:19.280 [2024-07-26 02:06:42.836283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.836961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.836978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.837971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.281 [2024-07-26 02:06:42.837987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:19.281 [2024-07-26 02:06:42.838013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.282 [2024-07-26 02:06:42.838032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:19.282 [2024-07-26 02:06:42.838081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.282 [2024-07-26 02:06:42.838100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:19.282 [2024-07-26 02:06:42.838129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.282 [2024-07-26 02:06:42.838145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:19.282 [2024-07-26 02:06:42.838172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.282 [2024-07-26 02:06:42.838189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:19.282 [2024-07-26 02:06:42.838216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.282 [2024-07-26 02:06:42.838233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:19.282 [2024-07-26 02:06:42.838260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.282 [2024-07-26 02:06:42.838276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:31:19.282 [2024-07-26 02:06:42.838303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:19.282 [2024-07-26 02:06:42.838320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0
[several hundred near-identical entries elided: every queued READ and WRITE on qid:1 (lba 90624 through 119016, len:8) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), reported by nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion between 02:06:42.838303 and 02:06:58.432991]
00:31:19.284 Received shutdown signal, test time was about 32.458599
seconds
00:31:19.284
00:31:19.284 Latency(us)
00:31:19.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:19.284 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:19.284 Verification LBA range: start 0x0 length 0x4000
00:31:19.284 Nvme0n1 : 32.46 8211.75 32.08 0.00 0.00 15560.80 242.73 4026531.84
00:31:19.284 ===================================================================================================================
00:31:19.284 Total : 8211.75 32.08 0.00 0.00 15560.80 242.73 4026531.84
00:31:19.284 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:19.541 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:19.541 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:19.541 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:31:19.541 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:19.541 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:31:19.541 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:19.541 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:31:19.541 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:19.541 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:31:19.541 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2388325 ']'
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2388325
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2388325 ']'
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2388325
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2388325
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2388325'
00:31:19.798 killing process with pid 2388325
02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2388325
00:31:19.798 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2388325
00:31:20.056 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:31:20.057 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:31:20.057 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:31:20.057 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:20.057 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:31:20.057 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:20.057 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:20.057 02:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:21.952
00:31:21.952 real 0m41.050s
00:31:21.952 user 1m58.666s
00:31:21.952 sys 0m12.777s
00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:21.952 ************************************
00:31:21.952 END TEST nvmf_host_multipath_status
00:31:21.952 ************************************
00:31:21.952 02:07:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
02:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
02:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
02:07:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:21.952 ************************************
START TEST nvmf_discovery_remove_ifc 00:31:21.952 ************************************ 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:21.952 * Looking for test storage... 00:31:21.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.952 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.210 02:07:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:22.210 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:22.211 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:22.211 02:07:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:31:24.107 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.108 02:07:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:24.108 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:24.108 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:24.108 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.108 02:07:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:24.108 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.108 02:07:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.108 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.109 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.109 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.109 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:24.109 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.109 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.109 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.109 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:24.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:24.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:31:24.109 00:31:24.109 --- 10.0.0.2 ping statistics --- 00:31:24.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.109 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:31:24.109 02:07:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:24.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:31:24.109 00:31:24.109 --- 10.0.0.1 ping statistics --- 00:31:24.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.109 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2394677 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2394677 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2394677 ']' 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:24.109 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.109 [2024-07-26 02:07:06.075789] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:31:24.109 [2024-07-26 02:07:06.075869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.109 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.371 [2024-07-26 02:07:06.140776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.371 [2024-07-26 02:07:06.228199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.371 [2024-07-26 02:07:06.228259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.371 [2024-07-26 02:07:06.228273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.371 [2024-07-26 02:07:06.228285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.371 [2024-07-26 02:07:06.228295] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:24.371 [2024-07-26 02:07:06.228337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.371 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:24.371 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:24.371 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:24.371 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:24.371 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.371 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.371 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:24.371 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.371 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.371 [2024-07-26 02:07:06.363420] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.371 [2024-07-26 02:07:06.371583] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:24.675 null0 00:31:24.675 [2024-07-26 02:07:06.403543] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.675 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.675 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2394818 00:31:24.675 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:24.675 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2394818 /tmp/host.sock 00:31:24.675 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2394818 ']' 00:31:24.675 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:31:24.675 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:24.675 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:24.675 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:24.675 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:24.675 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.675 [2024-07-26 02:07:06.466609] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:31:24.675 [2024-07-26 02:07:06.466677] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394818 ] 00:31:24.675 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.675 [2024-07-26 02:07:06.527439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.675 [2024-07-26 02:07:06.618397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.934 02:07:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:25.867 [2024-07-26 02:07:07.832859] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:25.867 [2024-07-26 02:07:07.832886] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:25.867 [2024-07-26 02:07:07.832911] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:26.125 [2024-07-26 02:07:07.961344] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:26.382 [2024-07-26 02:07:08.144206] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:26.382 [2024-07-26 02:07:08.144267] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:26.382 [2024-07-26 02:07:08.144310] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:26.382 [2024-07-26 02:07:08.144331] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:26.382 [2024-07-26 02:07:08.144353] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:26.382 [2024-07-26 02:07:08.149699] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xddd300 was disconnected and freed. delete nvme_qpair. 
00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:26.382 02:07:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:27.315 02:07:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:27.315 02:07:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.315 02:07:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:27.315 02:07:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.315 02:07:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:27.315 02:07:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.315 02:07:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:27.315 02:07:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.315 02:07:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:27.315 02:07:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:28.688 02:07:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:28.688 02:07:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.688 02:07:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:28.688 02:07:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.688 02:07:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:28.688 02:07:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:28.688 02:07:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:31:28.688 02:07:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.688 02:07:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:28.688 02:07:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:29.625 02:07:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:29.625 02:07:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.625 02:07:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:29.625 02:07:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.625 02:07:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:29.625 02:07:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:29.625 02:07:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:29.625 02:07:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.625 02:07:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:29.625 02:07:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:30.564 02:07:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:30.564 02:07:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.564 02:07:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:30.564 02:07:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.564 02:07:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.564 02:07:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:30.564 02:07:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:30.564 02:07:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.564 02:07:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:30.564 02:07:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:31.504 02:07:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:31.504 02:07:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.504 02:07:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:31.504 02:07:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.504 02:07:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:31.504 02:07:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:31.504 02:07:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:31.504 02:07:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.504 02:07:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:31.504 02:07:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:31:31.767 [2024-07-26 02:07:13.585912] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:31.767 [2024-07-26 02:07:13.585993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.767 [2024-07-26 02:07:13.586018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.767 [2024-07-26 02:07:13.586038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.767 [2024-07-26 02:07:13.586054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.767 [2024-07-26 02:07:13.586080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.767 [2024-07-26 02:07:13.586109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.767 [2024-07-26 02:07:13.586123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.767 [2024-07-26 02:07:13.586135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.767 [2024-07-26 02:07:13.586148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.768 [2024-07-26 02:07:13.586160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.768 [2024-07-26 02:07:13.586173] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3d00 is same with the state(5) to be set 00:31:31.768 [2024-07-26 02:07:13.595931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda3d00 (9): Bad file descriptor 00:31:31.768 [2024-07-26 02:07:13.605977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.701 02:07:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:32.701 02:07:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.701 02:07:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.701 02:07:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:32.701 02:07:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:32.701 02:07:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:32.701 02:07:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:32.701 [2024-07-26 02:07:14.637095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:32.701 [2024-07-26 02:07:14.637154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda3d00 with addr=10.0.0.2, port=4420 00:31:32.701 [2024-07-26 02:07:14.637177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3d00 is same with the state(5) to be set 00:31:32.701 [2024-07-26 02:07:14.637211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda3d00 (9): Bad file descriptor 00:31:32.701 [2024-07-26 02:07:14.637637] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to 
perform failover, already in progress. 00:31:32.701 [2024-07-26 02:07:14.637682] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.702 [2024-07-26 02:07:14.637703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.702 [2024-07-26 02:07:14.637722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.702 [2024-07-26 02:07:14.637747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.702 [2024-07-26 02:07:14.637766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.702 02:07:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.702 02:07:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:32.702 02:07:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:33.637 [2024-07-26 02:07:15.640260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:33.637 [2024-07-26 02:07:15.640288] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:33.637 [2024-07-26 02:07:15.640302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:33.637 [2024-07-26 02:07:15.640313] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:33.637 [2024-07-26 02:07:15.640346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:33.637 [2024-07-26 02:07:15.640385] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:33.637 [2024-07-26 02:07:15.640421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.637 [2024-07-26 02:07:15.640444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.637 [2024-07-26 02:07:15.640463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.637 [2024-07-26 02:07:15.640478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.637 [2024-07-26 02:07:15.640495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.637 [2024-07-26 02:07:15.640510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.637 [2024-07-26 02:07:15.640526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.637 [2024-07-26 02:07:15.640541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.637 [2024-07-26 02:07:15.640558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.637 [2024-07-26 02:07:15.640574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.637 [2024-07-26 02:07:15.640589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:31:33.637 [2024-07-26 02:07:15.640806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda3160 (9): Bad file descriptor 00:31:33.637 [2024-07-26 02:07:15.641830] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:33.637 [2024-07-26 02:07:15.641856] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:33.897 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:33.897 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.897 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:33.897 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.897 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:33.897 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:33.897 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.898 02:07:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:33.898 02:07:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:34.833 02:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.833 02:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.833 02:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.834 02:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.834 02:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:34.834 02:07:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.834 02:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.834 02:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.834 02:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:34.834 02:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:35.771 [2024-07-26 02:07:17.657042] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:35.771 [2024-07-26 02:07:17.657081] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:35.771 [2024-07-26 02:07:17.657104] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:36.030 [2024-07-26 02:07:17.784560] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:36.030 02:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:36.030 02:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.030 02:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:36.030 02:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.030 02:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:36.030 02:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:36.030 02:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:31:36.030 02:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.030 02:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:36.030 02:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:36.030 [2024-07-26 02:07:17.970970] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:36.030 [2024-07-26 02:07:17.971027] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:36.030 [2024-07-26 02:07:17.971073] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:36.030 [2024-07-26 02:07:17.971099] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:36.030 [2024-07-26 02:07:17.971113] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:36.030 [2024-07-26 02:07:17.975970] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd922b0 was disconnected and freed. delete nvme_qpair. 
00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2394818 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2394818 ']' 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2394818 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:36.960 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:36.961 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2394818 
00:31:36.961 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:36.961 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:36.961 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2394818' 00:31:36.961 killing process with pid 2394818 00:31:36.961 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2394818 00:31:36.961 02:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2394818 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:37.218 rmmod nvme_tcp 00:31:37.218 rmmod nvme_fabrics 00:31:37.218 rmmod nvme_keyring 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2394677 ']' 00:31:37.218 
02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2394677 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2394677 ']' 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2394677 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2394677 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2394677' 00:31:37.218 killing process with pid 2394677 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2394677 00:31:37.218 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2394677 00:31:37.476 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:37.476 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:37.476 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:37.476 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:37.476 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:31:37.476 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.476 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.476 02:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:40.013 00:31:40.013 real 0m17.579s 00:31:40.013 user 0m25.540s 00:31:40.013 sys 0m3.016s 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.013 ************************************ 00:31:40.013 END TEST nvmf_discovery_remove_ifc 00:31:40.013 ************************************ 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.013 ************************************ 00:31:40.013 START TEST nvmf_identify_kernel_target 00:31:40.013 ************************************ 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:40.013 * Looking for test storage... 
00:31:40.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.013 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:40.014 02:07:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:41.913 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:41.913 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.914 02:07:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:41.914 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:41.914 02:07:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:41.914 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:41.914 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:41.914 
02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:41.914 
02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:41.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:41.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:31:41.914 00:31:41.914 --- 10.0.0.2 ping statistics --- 00:31:41.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.914 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:41.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:41.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:31:41.914 00:31:41.914 --- 10.0.0.1 ping statistics --- 00:31:41.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.914 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.914 02:07:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:41.914 02:07:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:42.881 Waiting for block devices as requested 00:31:42.881 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:43.138 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:43.138 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:43.138 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:43.397 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:43.397 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:43.397 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:43.397 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:43.397 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:43.655 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:43.655 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:43.655 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:43.914 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:43.914 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:43.914 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:43.914 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:44.172 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:44.172 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:44.172 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:44.172 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:31:44.172 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:44.172 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:44.172 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:44.172 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:44.172 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:44.172 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:44.172 No valid GPT data, bailing 00:31:44.172 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:44.172 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:44.173 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:44.431 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:44.431 00:31:44.431 Discovery Log Number of Records 2, Generation counter 2 00:31:44.431 =====Discovery Log Entry 0====== 00:31:44.431 trtype: tcp 00:31:44.431 adrfam: ipv4 00:31:44.431 subtype: current discovery subsystem 00:31:44.431 treq: not specified, sq flow control disable supported 00:31:44.431 portid: 1 00:31:44.431 trsvcid: 4420 00:31:44.431 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:44.431 traddr: 10.0.0.1 00:31:44.431 eflags: none 00:31:44.431 sectype: none 00:31:44.431 =====Discovery Log Entry 1====== 00:31:44.431 trtype: tcp 00:31:44.431 adrfam: ipv4 00:31:44.431 subtype: nvme subsystem 00:31:44.431 treq: not specified, sq flow control disable supported 00:31:44.431 portid: 1 
00:31:44.431 trsvcid: 4420 00:31:44.431 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:44.431 traddr: 10.0.0.1 00:31:44.431 eflags: none 00:31:44.431 sectype: none 00:31:44.431 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:44.431 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:44.431 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.431 ===================================================== 00:31:44.431 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:44.431 ===================================================== 00:31:44.431 Controller Capabilities/Features 00:31:44.431 ================================ 00:31:44.431 Vendor ID: 0000 00:31:44.432 Subsystem Vendor ID: 0000 00:31:44.432 Serial Number: b9c6d4ff8cdc963ea013 00:31:44.432 Model Number: Linux 00:31:44.432 Firmware Version: 6.7.0-68 00:31:44.432 Recommended Arb Burst: 0 00:31:44.432 IEEE OUI Identifier: 00 00 00 00:31:44.432 Multi-path I/O 00:31:44.432 May have multiple subsystem ports: No 00:31:44.432 May have multiple controllers: No 00:31:44.432 Associated with SR-IOV VF: No 00:31:44.432 Max Data Transfer Size: Unlimited 00:31:44.432 Max Number of Namespaces: 0 00:31:44.432 Max Number of I/O Queues: 1024 00:31:44.432 NVMe Specification Version (VS): 1.3 00:31:44.432 NVMe Specification Version (Identify): 1.3 00:31:44.432 Maximum Queue Entries: 1024 00:31:44.432 Contiguous Queues Required: No 00:31:44.432 Arbitration Mechanisms Supported 00:31:44.432 Weighted Round Robin: Not Supported 00:31:44.432 Vendor Specific: Not Supported 00:31:44.432 Reset Timeout: 7500 ms 00:31:44.432 Doorbell Stride: 4 bytes 00:31:44.432 NVM Subsystem Reset: Not Supported 00:31:44.432 Command Sets Supported 00:31:44.432 NVM Command Set: Supported 00:31:44.432 Boot Partition: Not Supported 
00:31:44.432 Memory Page Size Minimum: 4096 bytes 00:31:44.432 Memory Page Size Maximum: 4096 bytes 00:31:44.432 Persistent Memory Region: Not Supported 00:31:44.432 Optional Asynchronous Events Supported 00:31:44.432 Namespace Attribute Notices: Not Supported 00:31:44.432 Firmware Activation Notices: Not Supported 00:31:44.432 ANA Change Notices: Not Supported 00:31:44.432 PLE Aggregate Log Change Notices: Not Supported 00:31:44.432 LBA Status Info Alert Notices: Not Supported 00:31:44.432 EGE Aggregate Log Change Notices: Not Supported 00:31:44.432 Normal NVM Subsystem Shutdown event: Not Supported 00:31:44.432 Zone Descriptor Change Notices: Not Supported 00:31:44.432 Discovery Log Change Notices: Supported 00:31:44.432 Controller Attributes 00:31:44.432 128-bit Host Identifier: Not Supported 00:31:44.432 Non-Operational Permissive Mode: Not Supported 00:31:44.432 NVM Sets: Not Supported 00:31:44.432 Read Recovery Levels: Not Supported 00:31:44.432 Endurance Groups: Not Supported 00:31:44.432 Predictable Latency Mode: Not Supported 00:31:44.432 Traffic Based Keep ALive: Not Supported 00:31:44.432 Namespace Granularity: Not Supported 00:31:44.432 SQ Associations: Not Supported 00:31:44.432 UUID List: Not Supported 00:31:44.432 Multi-Domain Subsystem: Not Supported 00:31:44.432 Fixed Capacity Management: Not Supported 00:31:44.432 Variable Capacity Management: Not Supported 00:31:44.432 Delete Endurance Group: Not Supported 00:31:44.432 Delete NVM Set: Not Supported 00:31:44.432 Extended LBA Formats Supported: Not Supported 00:31:44.432 Flexible Data Placement Supported: Not Supported 00:31:44.432 00:31:44.432 Controller Memory Buffer Support 00:31:44.432 ================================ 00:31:44.432 Supported: No 00:31:44.432 00:31:44.432 Persistent Memory Region Support 00:31:44.432 ================================ 00:31:44.432 Supported: No 00:31:44.432 00:31:44.432 Admin Command Set Attributes 00:31:44.432 ============================ 00:31:44.432 Security 
Send/Receive: Not Supported 00:31:44.432 Format NVM: Not Supported 00:31:44.432 Firmware Activate/Download: Not Supported 00:31:44.432 Namespace Management: Not Supported 00:31:44.432 Device Self-Test: Not Supported 00:31:44.432 Directives: Not Supported 00:31:44.432 NVMe-MI: Not Supported 00:31:44.432 Virtualization Management: Not Supported 00:31:44.432 Doorbell Buffer Config: Not Supported 00:31:44.432 Get LBA Status Capability: Not Supported 00:31:44.432 Command & Feature Lockdown Capability: Not Supported 00:31:44.432 Abort Command Limit: 1 00:31:44.432 Async Event Request Limit: 1 00:31:44.432 Number of Firmware Slots: N/A 00:31:44.432 Firmware Slot 1 Read-Only: N/A 00:31:44.432 Firmware Activation Without Reset: N/A 00:31:44.432 Multiple Update Detection Support: N/A 00:31:44.432 Firmware Update Granularity: No Information Provided 00:31:44.432 Per-Namespace SMART Log: No 00:31:44.432 Asymmetric Namespace Access Log Page: Not Supported 00:31:44.432 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:44.432 Command Effects Log Page: Not Supported 00:31:44.432 Get Log Page Extended Data: Supported 00:31:44.432 Telemetry Log Pages: Not Supported 00:31:44.432 Persistent Event Log Pages: Not Supported 00:31:44.432 Supported Log Pages Log Page: May Support 00:31:44.432 Commands Supported & Effects Log Page: Not Supported 00:31:44.432 Feature Identifiers & Effects Log Page:May Support 00:31:44.432 NVMe-MI Commands & Effects Log Page: May Support 00:31:44.432 Data Area 4 for Telemetry Log: Not Supported 00:31:44.432 Error Log Page Entries Supported: 1 00:31:44.432 Keep Alive: Not Supported 00:31:44.432 00:31:44.432 NVM Command Set Attributes 00:31:44.432 ========================== 00:31:44.432 Submission Queue Entry Size 00:31:44.432 Max: 1 00:31:44.432 Min: 1 00:31:44.432 Completion Queue Entry Size 00:31:44.432 Max: 1 00:31:44.432 Min: 1 00:31:44.432 Number of Namespaces: 0 00:31:44.432 Compare Command: Not Supported 00:31:44.432 Write Uncorrectable Command: 
Not Supported 00:31:44.432 Dataset Management Command: Not Supported 00:31:44.432 Write Zeroes Command: Not Supported 00:31:44.432 Set Features Save Field: Not Supported 00:31:44.432 Reservations: Not Supported 00:31:44.432 Timestamp: Not Supported 00:31:44.432 Copy: Not Supported 00:31:44.432 Volatile Write Cache: Not Present 00:31:44.432 Atomic Write Unit (Normal): 1 00:31:44.432 Atomic Write Unit (PFail): 1 00:31:44.432 Atomic Compare & Write Unit: 1 00:31:44.432 Fused Compare & Write: Not Supported 00:31:44.432 Scatter-Gather List 00:31:44.432 SGL Command Set: Supported 00:31:44.432 SGL Keyed: Not Supported 00:31:44.432 SGL Bit Bucket Descriptor: Not Supported 00:31:44.432 SGL Metadata Pointer: Not Supported 00:31:44.432 Oversized SGL: Not Supported 00:31:44.432 SGL Metadata Address: Not Supported 00:31:44.432 SGL Offset: Supported 00:31:44.432 Transport SGL Data Block: Not Supported 00:31:44.432 Replay Protected Memory Block: Not Supported 00:31:44.432 00:31:44.432 Firmware Slot Information 00:31:44.432 ========================= 00:31:44.432 Active slot: 0 00:31:44.432 00:31:44.432 00:31:44.432 Error Log 00:31:44.432 ========= 00:31:44.432 00:31:44.432 Active Namespaces 00:31:44.432 ================= 00:31:44.432 Discovery Log Page 00:31:44.432 ================== 00:31:44.432 Generation Counter: 2 00:31:44.432 Number of Records: 2 00:31:44.432 Record Format: 0 00:31:44.432 00:31:44.432 Discovery Log Entry 0 00:31:44.432 ---------------------- 00:31:44.432 Transport Type: 3 (TCP) 00:31:44.432 Address Family: 1 (IPv4) 00:31:44.432 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:44.432 Entry Flags: 00:31:44.432 Duplicate Returned Information: 0 00:31:44.432 Explicit Persistent Connection Support for Discovery: 0 00:31:44.432 Transport Requirements: 00:31:44.432 Secure Channel: Not Specified 00:31:44.432 Port ID: 1 (0x0001) 00:31:44.432 Controller ID: 65535 (0xffff) 00:31:44.432 Admin Max SQ Size: 32 00:31:44.432 Transport Service Identifier: 4420 
00:31:44.432 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:44.432 Transport Address: 10.0.0.1 00:31:44.432 Discovery Log Entry 1 00:31:44.432 ---------------------- 00:31:44.432 Transport Type: 3 (TCP) 00:31:44.432 Address Family: 1 (IPv4) 00:31:44.432 Subsystem Type: 2 (NVM Subsystem) 00:31:44.432 Entry Flags: 00:31:44.432 Duplicate Returned Information: 0 00:31:44.432 Explicit Persistent Connection Support for Discovery: 0 00:31:44.432 Transport Requirements: 00:31:44.432 Secure Channel: Not Specified 00:31:44.432 Port ID: 1 (0x0001) 00:31:44.432 Controller ID: 65535 (0xffff) 00:31:44.432 Admin Max SQ Size: 32 00:31:44.432 Transport Service Identifier: 4420 00:31:44.432 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:44.432 Transport Address: 10.0.0.1 00:31:44.432 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:44.433 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.691 get_feature(0x01) failed 00:31:44.691 get_feature(0x02) failed 00:31:44.691 get_feature(0x04) failed 00:31:44.691 ===================================================== 00:31:44.691 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:44.691 ===================================================== 00:31:44.691 Controller Capabilities/Features 00:31:44.691 ================================ 00:31:44.691 Vendor ID: 0000 00:31:44.691 Subsystem Vendor ID: 0000 00:31:44.691 Serial Number: 1139c05d9747736e8e6f 00:31:44.691 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:44.691 Firmware Version: 6.7.0-68 00:31:44.691 Recommended Arb Burst: 6 00:31:44.691 IEEE OUI Identifier: 00 00 00 00:31:44.691 Multi-path I/O 00:31:44.691 May have multiple subsystem ports: Yes 00:31:44.691 May have multiple 
controllers: Yes 00:31:44.691 Associated with SR-IOV VF: No 00:31:44.691 Max Data Transfer Size: Unlimited 00:31:44.691 Max Number of Namespaces: 1024 00:31:44.691 Max Number of I/O Queues: 128 00:31:44.691 NVMe Specification Version (VS): 1.3 00:31:44.691 NVMe Specification Version (Identify): 1.3 00:31:44.691 Maximum Queue Entries: 1024 00:31:44.691 Contiguous Queues Required: No 00:31:44.691 Arbitration Mechanisms Supported 00:31:44.691 Weighted Round Robin: Not Supported 00:31:44.691 Vendor Specific: Not Supported 00:31:44.691 Reset Timeout: 7500 ms 00:31:44.691 Doorbell Stride: 4 bytes 00:31:44.691 NVM Subsystem Reset: Not Supported 00:31:44.691 Command Sets Supported 00:31:44.691 NVM Command Set: Supported 00:31:44.691 Boot Partition: Not Supported 00:31:44.691 Memory Page Size Minimum: 4096 bytes 00:31:44.691 Memory Page Size Maximum: 4096 bytes 00:31:44.691 Persistent Memory Region: Not Supported 00:31:44.691 Optional Asynchronous Events Supported 00:31:44.691 Namespace Attribute Notices: Supported 00:31:44.691 Firmware Activation Notices: Not Supported 00:31:44.691 ANA Change Notices: Supported 00:31:44.691 PLE Aggregate Log Change Notices: Not Supported 00:31:44.691 LBA Status Info Alert Notices: Not Supported 00:31:44.691 EGE Aggregate Log Change Notices: Not Supported 00:31:44.691 Normal NVM Subsystem Shutdown event: Not Supported 00:31:44.691 Zone Descriptor Change Notices: Not Supported 00:31:44.691 Discovery Log Change Notices: Not Supported 00:31:44.691 Controller Attributes 00:31:44.691 128-bit Host Identifier: Supported 00:31:44.691 Non-Operational Permissive Mode: Not Supported 00:31:44.691 NVM Sets: Not Supported 00:31:44.691 Read Recovery Levels: Not Supported 00:31:44.691 Endurance Groups: Not Supported 00:31:44.691 Predictable Latency Mode: Not Supported 00:31:44.691 Traffic Based Keep ALive: Supported 00:31:44.691 Namespace Granularity: Not Supported 00:31:44.691 SQ Associations: Not Supported 00:31:44.691 UUID List: Not Supported 
00:31:44.691 Multi-Domain Subsystem: Not Supported 00:31:44.691 Fixed Capacity Management: Not Supported 00:31:44.691 Variable Capacity Management: Not Supported 00:31:44.691 Delete Endurance Group: Not Supported 00:31:44.691 Delete NVM Set: Not Supported 00:31:44.691 Extended LBA Formats Supported: Not Supported 00:31:44.691 Flexible Data Placement Supported: Not Supported 00:31:44.691 00:31:44.691 Controller Memory Buffer Support 00:31:44.691 ================================ 00:31:44.691 Supported: No 00:31:44.691 00:31:44.691 Persistent Memory Region Support 00:31:44.691 ================================ 00:31:44.691 Supported: No 00:31:44.691 00:31:44.691 Admin Command Set Attributes 00:31:44.691 ============================ 00:31:44.691 Security Send/Receive: Not Supported 00:31:44.691 Format NVM: Not Supported 00:31:44.691 Firmware Activate/Download: Not Supported 00:31:44.691 Namespace Management: Not Supported 00:31:44.691 Device Self-Test: Not Supported 00:31:44.691 Directives: Not Supported 00:31:44.691 NVMe-MI: Not Supported 00:31:44.691 Virtualization Management: Not Supported 00:31:44.691 Doorbell Buffer Config: Not Supported 00:31:44.691 Get LBA Status Capability: Not Supported 00:31:44.691 Command & Feature Lockdown Capability: Not Supported 00:31:44.691 Abort Command Limit: 4 00:31:44.691 Async Event Request Limit: 4 00:31:44.691 Number of Firmware Slots: N/A 00:31:44.691 Firmware Slot 1 Read-Only: N/A 00:31:44.691 Firmware Activation Without Reset: N/A 00:31:44.691 Multiple Update Detection Support: N/A 00:31:44.691 Firmware Update Granularity: No Information Provided 00:31:44.691 Per-Namespace SMART Log: Yes 00:31:44.691 Asymmetric Namespace Access Log Page: Supported 00:31:44.691 ANA Transition Time : 10 sec 00:31:44.691 00:31:44.691 Asymmetric Namespace Access Capabilities 00:31:44.691 ANA Optimized State : Supported 00:31:44.691 ANA Non-Optimized State : Supported 00:31:44.691 ANA Inaccessible State : Supported 00:31:44.691 ANA Persistent Loss 
State : Supported 00:31:44.691 ANA Change State : Supported 00:31:44.691 ANAGRPID is not changed : No 00:31:44.691 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:44.691 00:31:44.691 ANA Group Identifier Maximum : 128 00:31:44.691 Number of ANA Group Identifiers : 128 00:31:44.691 Max Number of Allowed Namespaces : 1024 00:31:44.691 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:44.691 Command Effects Log Page: Supported 00:31:44.691 Get Log Page Extended Data: Supported 00:31:44.691 Telemetry Log Pages: Not Supported 00:31:44.691 Persistent Event Log Pages: Not Supported 00:31:44.691 Supported Log Pages Log Page: May Support 00:31:44.691 Commands Supported & Effects Log Page: Not Supported 00:31:44.691 Feature Identifiers & Effects Log Page:May Support 00:31:44.691 NVMe-MI Commands & Effects Log Page: May Support 00:31:44.691 Data Area 4 for Telemetry Log: Not Supported 00:31:44.691 Error Log Page Entries Supported: 128 00:31:44.691 Keep Alive: Supported 00:31:44.692 Keep Alive Granularity: 1000 ms 00:31:44.692 00:31:44.692 NVM Command Set Attributes 00:31:44.692 ========================== 00:31:44.692 Submission Queue Entry Size 00:31:44.692 Max: 64 00:31:44.692 Min: 64 00:31:44.692 Completion Queue Entry Size 00:31:44.692 Max: 16 00:31:44.692 Min: 16 00:31:44.692 Number of Namespaces: 1024 00:31:44.692 Compare Command: Not Supported 00:31:44.692 Write Uncorrectable Command: Not Supported 00:31:44.692 Dataset Management Command: Supported 00:31:44.692 Write Zeroes Command: Supported 00:31:44.692 Set Features Save Field: Not Supported 00:31:44.692 Reservations: Not Supported 00:31:44.692 Timestamp: Not Supported 00:31:44.692 Copy: Not Supported 00:31:44.692 Volatile Write Cache: Present 00:31:44.692 Atomic Write Unit (Normal): 1 00:31:44.692 Atomic Write Unit (PFail): 1 00:31:44.692 Atomic Compare & Write Unit: 1 00:31:44.692 Fused Compare & Write: Not Supported 00:31:44.692 Scatter-Gather List 00:31:44.692 SGL Command Set: Supported 00:31:44.692 SGL 
Keyed: Not Supported 00:31:44.692 SGL Bit Bucket Descriptor: Not Supported 00:31:44.692 SGL Metadata Pointer: Not Supported 00:31:44.692 Oversized SGL: Not Supported 00:31:44.692 SGL Metadata Address: Not Supported 00:31:44.692 SGL Offset: Supported 00:31:44.692 Transport SGL Data Block: Not Supported 00:31:44.692 Replay Protected Memory Block: Not Supported 00:31:44.692 00:31:44.692 Firmware Slot Information 00:31:44.692 ========================= 00:31:44.692 Active slot: 0 00:31:44.692 00:31:44.692 Asymmetric Namespace Access 00:31:44.692 =========================== 00:31:44.692 Change Count : 0 00:31:44.692 Number of ANA Group Descriptors : 1 00:31:44.692 ANA Group Descriptor : 0 00:31:44.692 ANA Group ID : 1 00:31:44.692 Number of NSID Values : 1 00:31:44.692 Change Count : 0 00:31:44.692 ANA State : 1 00:31:44.692 Namespace Identifier : 1 00:31:44.692 00:31:44.692 Commands Supported and Effects 00:31:44.692 ============================== 00:31:44.692 Admin Commands 00:31:44.692 -------------- 00:31:44.692 Get Log Page (02h): Supported 00:31:44.692 Identify (06h): Supported 00:31:44.692 Abort (08h): Supported 00:31:44.692 Set Features (09h): Supported 00:31:44.692 Get Features (0Ah): Supported 00:31:44.692 Asynchronous Event Request (0Ch): Supported 00:31:44.692 Keep Alive (18h): Supported 00:31:44.692 I/O Commands 00:31:44.692 ------------ 00:31:44.692 Flush (00h): Supported 00:31:44.692 Write (01h): Supported LBA-Change 00:31:44.692 Read (02h): Supported 00:31:44.692 Write Zeroes (08h): Supported LBA-Change 00:31:44.692 Dataset Management (09h): Supported 00:31:44.692 00:31:44.692 Error Log 00:31:44.692 ========= 00:31:44.692 Entry: 0 00:31:44.692 Error Count: 0x3 00:31:44.692 Submission Queue Id: 0x0 00:31:44.692 Command Id: 0x5 00:31:44.692 Phase Bit: 0 00:31:44.692 Status Code: 0x2 00:31:44.692 Status Code Type: 0x0 00:31:44.692 Do Not Retry: 1 00:31:44.692 Error Location: 0x28 00:31:44.692 LBA: 0x0 00:31:44.692 Namespace: 0x0 00:31:44.692 Vendor Log Page: 
0x0 00:31:44.692 ----------- 00:31:44.692 Entry: 1 00:31:44.692 Error Count: 0x2 00:31:44.692 Submission Queue Id: 0x0 00:31:44.692 Command Id: 0x5 00:31:44.692 Phase Bit: 0 00:31:44.692 Status Code: 0x2 00:31:44.692 Status Code Type: 0x0 00:31:44.692 Do Not Retry: 1 00:31:44.692 Error Location: 0x28 00:31:44.692 LBA: 0x0 00:31:44.692 Namespace: 0x0 00:31:44.692 Vendor Log Page: 0x0 00:31:44.692 ----------- 00:31:44.692 Entry: 2 00:31:44.692 Error Count: 0x1 00:31:44.692 Submission Queue Id: 0x0 00:31:44.692 Command Id: 0x4 00:31:44.692 Phase Bit: 0 00:31:44.692 Status Code: 0x2 00:31:44.692 Status Code Type: 0x0 00:31:44.692 Do Not Retry: 1 00:31:44.692 Error Location: 0x28 00:31:44.692 LBA: 0x0 00:31:44.692 Namespace: 0x0 00:31:44.692 Vendor Log Page: 0x0 00:31:44.692 00:31:44.692 Number of Queues 00:31:44.692 ================ 00:31:44.692 Number of I/O Submission Queues: 128 00:31:44.692 Number of I/O Completion Queues: 128 00:31:44.692 00:31:44.692 ZNS Specific Controller Data 00:31:44.692 ============================ 00:31:44.692 Zone Append Size Limit: 0 00:31:44.692 00:31:44.692 00:31:44.692 Active Namespaces 00:31:44.692 ================= 00:31:44.692 get_feature(0x05) failed 00:31:44.692 Namespace ID:1 00:31:44.692 Command Set Identifier: NVM (00h) 00:31:44.692 Deallocate: Supported 00:31:44.692 Deallocated/Unwritten Error: Not Supported 00:31:44.692 Deallocated Read Value: Unknown 00:31:44.692 Deallocate in Write Zeroes: Not Supported 00:31:44.692 Deallocated Guard Field: 0xFFFF 00:31:44.692 Flush: Supported 00:31:44.692 Reservation: Not Supported 00:31:44.692 Namespace Sharing Capabilities: Multiple Controllers 00:31:44.692 Size (in LBAs): 1953525168 (931GiB) 00:31:44.692 Capacity (in LBAs): 1953525168 (931GiB) 00:31:44.692 Utilization (in LBAs): 1953525168 (931GiB) 00:31:44.692 UUID: 25a543b7-59ca-48df-a73e-31d53d45ffbb 00:31:44.692 Thin Provisioning: Not Supported 00:31:44.692 Per-NS Atomic Units: Yes 00:31:44.692 Atomic Boundary Size (Normal): 0 
00:31:44.692 Atomic Boundary Size (PFail): 0 00:31:44.692 Atomic Boundary Offset: 0 00:31:44.692 NGUID/EUI64 Never Reused: No 00:31:44.692 ANA group ID: 1 00:31:44.692 Namespace Write Protected: No 00:31:44.692 Number of LBA Formats: 1 00:31:44.692 Current LBA Format: LBA Format #00 00:31:44.692 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:44.692 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:44.692 rmmod nvme_tcp 00:31:44.692 rmmod nvme_fabrics 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:44.692 
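The configfs `mkdir`/`echo`/`ln -s` sequence logged earlier (and the `clean_kernel_target` teardown that follows) can be sketched as a pair of shell functions. The log elides which attribute file each `echo` targets; the names used here (`device_path`, `enable`, `addr_traddr`, `addr_trtype`, `addr_trsvcid`, `addr_adrfam`, `attr_allow_any_host`) are the standard Linux nvmet configfs attributes, which is an assumption about this script's internals. The sketch writes under a temporary stand-in root so it runs without root privileges or the nvmet module loaded.

```shell
#!/bin/sh
# Sketch of the kernel nvmet target lifecycle driven by nvmf/common.sh above.
# CFG stands in for /sys/kernel/config so this runs unprivileged; the
# attribute file names are the standard nvmet configfs ones (assumption:
# the elided echo targets in the log are these files).
set -e
CFG="$(mktemp -d)"
NQN=nqn.2016-06.io.spdk:testnqn

setup_target() {
  mkdir -p "$CFG/nvmet/subsystems/$NQN/namespaces/1" "$CFG/nvmet/ports/1"
  echo 1            > "$CFG/nvmet/subsystems/$NQN/attr_allow_any_host"
  echo /dev/nvme0n1 > "$CFG/nvmet/subsystems/$NQN/namespaces/1/device_path"
  echo 1            > "$CFG/nvmet/subsystems/$NQN/namespaces/1/enable"
  echo 10.0.0.1     > "$CFG/nvmet/ports/1/addr_traddr"
  echo tcp          > "$CFG/nvmet/ports/1/addr_trtype"
  echo 4420         > "$CFG/nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$CFG/nvmet/ports/1/addr_adrfam"
  # On real configfs the ports/1/subsystems directory exists automatically.
  mkdir -p "$CFG/nvmet/ports/1/subsystems"
  # Linking the subsystem under the port exposes it on 10.0.0.1:4420.
  ln -s "$CFG/nvmet/subsystems/$NQN" "$CFG/nvmet/ports/1/subsystems/"
}

# Mirrors clean_kernel_target; not invoked here because the stand-in root
# keeps regular files where configfs keeps virtual ones, so rmdir would fail.
teardown_target() {
  echo 0 > "$CFG/nvmet/subsystems/$NQN/namespaces/1/enable"
  rm -f "$CFG/nvmet/ports/1/subsystems/$NQN"
  rmdir "$CFG/nvmet/subsystems/$NQN/namespaces/1" "$CFG/nvmet/ports/1"
  rmdir "$CFG/nvmet/subsystems/$NQN"
}

setup_target
```

On a real host the same writes require root, `modprobe nvmet nvmet_tcp` beforehand, and `modprobe -r nvmet_tcp nvmet` after teardown, as the log shows.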
02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.692 02:07:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.596 02:07:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:46.596 02:07:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:46.596 02:07:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:46.596 02:07:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:46.596 02:07:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:46.596 02:07:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:46.596 02:07:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:46.596 02:07:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:46.596 02:07:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:46.596 02:07:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:46.596 02:07:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:47.974 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:47.974 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:47.974 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:47.974 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:47.974 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:47.974 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:47.974 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:47.974 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:47.974 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:47.974 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:47.974 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:47.974 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:47.974 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:47.974 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:47.974 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:47.974 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:48.922 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:49.180 00:31:49.180 real 0m9.441s 00:31:49.180 user 0m1.989s 00:31:49.180 sys 0m3.368s 00:31:49.180 02:07:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:49.180 02:07:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:49.180 ************************************ 00:31:49.180 END TEST nvmf_identify_kernel_target 00:31:49.180 ************************************ 00:31:49.180 02:07:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:49.180 02:07:31 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:49.180 02:07:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:49.180 02:07:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.180 ************************************ 00:31:49.180 START TEST nvmf_auth_host 00:31:49.180 ************************************ 00:31:49.180 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:49.180 * Looking for test storage... 00:31:49.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:49.180 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.180 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.181 02:07:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:49.181 02:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.080 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:51.081 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:51.081 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:51.081 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:51.081 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:51.081 02:07:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:51.081 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:51.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:31:51.339 00:31:51.339 --- 10.0.0.2 ping statistics --- 00:31:51.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.339 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:51.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:31:51.339 00:31:51.339 --- 10.0.0.1 ping statistics --- 00:31:51.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.339 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2401908 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2401908 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2401908 ']' 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:51.339 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.596 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:51.596 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:51.596 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:51.596 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:51.596 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.853 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.853 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:51.853 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:51.853 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.853 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.853 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.853 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:51.853 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:51.853 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8c50b71b5aa72689886153d6f7a30362 00:31:51.854 02:07:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.PRf 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8c50b71b5aa72689886153d6f7a30362 0 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8c50b71b5aa72689886153d6f7a30362 0 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8c50b71b5aa72689886153d6f7a30362 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.PRf 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.PRf 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.PRf 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:51.854 02:07:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8465294c19c20ea728fa61feb33a0f40d2da2d18ecaf3834833144f77d10adfb 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2m5 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8465294c19c20ea728fa61feb33a0f40d2da2d18ecaf3834833144f77d10adfb 3 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8465294c19c20ea728fa61feb33a0f40d2da2d18ecaf3834833144f77d10adfb 3 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8465294c19c20ea728fa61feb33a0f40d2da2d18ecaf3834833144f77d10adfb 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2m5 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2m5 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.2m5 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2567f64181b6a55bfecf39842647da4e3f454b79586aae33 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.qjF 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2567f64181b6a55bfecf39842647da4e3f454b79586aae33 0 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2567f64181b6a55bfecf39842647da4e3f454b79586aae33 0 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2567f64181b6a55bfecf39842647da4e3f454b79586aae33 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.qjF 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.qjF 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.qjF 
00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8ab058411f521a2c2e72c4aec09044ea24ffe4b7f3e677fc 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.nad 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8ab058411f521a2c2e72c4aec09044ea24ffe4b7f3e677fc 2 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8ab058411f521a2c2e72c4aec09044ea24ffe4b7f3e677fc 2 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8ab058411f521a2c2e72c4aec09044ea24ffe4b7f3e677fc 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.854 02:07:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.nad 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.nad 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.nad 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=381a348f7760695c9f406e203d4d0d8d 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0NK 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 381a348f7760695c9f406e203d4d0d8d 1 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 381a348f7760695c9f406e203d4d0d8d 1 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=381a348f7760695c9f406e203d4d0d8d 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:51.854 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0NK 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0NK 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.0NK 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ac1174704979b8071859adf3e00db2d3 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bdy 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ac1174704979b8071859adf3e00db2d3 1 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ac1174704979b8071859adf3e00db2d3 1 00:31:52.113 02:07:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ac1174704979b8071859adf3e00db2d3 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bdy 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bdy 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.bdy 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d3b0086b03d6168fe9672a466fd3cd82d9d745041c1d7945 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kqK 00:31:52.113 02:07:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d3b0086b03d6168fe9672a466fd3cd82d9d745041c1d7945 2 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d3b0086b03d6168fe9672a466fd3cd82d9d745041c1d7945 2 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d3b0086b03d6168fe9672a466fd3cd82d9d745041c1d7945 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:52.113 02:07:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kqK 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kqK 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.kqK 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=de40f69854da6b073bdddb6acabf5e86 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EBX 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de40f69854da6b073bdddb6acabf5e86 0 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de40f69854da6b073bdddb6acabf5e86 0 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de40f69854da6b073bdddb6acabf5e86 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.EBX 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EBX 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.EBX 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=428f329b697ecd5357e28def0d60fdf85ede66835487f77fdc9686988a33d5a2 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Crv 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 428f329b697ecd5357e28def0d60fdf85ede66835487f77fdc9686988a33d5a2 3 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 428f329b697ecd5357e28def0d60fdf85ede66835487f77fdc9686988a33d5a2 3 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=428f329b697ecd5357e28def0d60fdf85ede66835487f77fdc9686988a33d5a2 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Crv 00:31:52.113 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Crv 00:31:52.114 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Crv 00:31:52.114 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:52.114 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2401908 00:31:52.114 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@831 -- # '[' -z 2401908 ']' 00:31:52.114 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.114 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:52.114 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.114 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:52.114 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PRf 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.2m5 ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2m5 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.qjF 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.nad ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nad 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0NK 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.bdy ]] 00:31:52.680 02:07:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bdy 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.kqK 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.EBX ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.EBX 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Crv 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.680 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:52.681 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:52.681 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:52.681 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:52.681 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:52.681 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:52.681 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:31:52.681 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:52.681 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:52.681 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:52.681 02:07:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:53.614 Waiting for block devices as requested 00:31:53.873 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:53.873 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:54.132 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:54.132 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:54.132 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:54.132 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:54.392 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:54.392 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:54.392 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:54.392 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:54.650 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:54.650 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:54.650 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:54.909 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:54.909 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:54.909 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:54.909 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:55.476 No valid GPT data, bailing 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:55.476 02:07:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:55.476 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:55.476 00:31:55.476 Discovery Log Number of Records 2, Generation counter 2 00:31:55.476 =====Discovery Log Entry 0====== 00:31:55.476 trtype: tcp 00:31:55.476 adrfam: ipv4 00:31:55.476 subtype: current discovery subsystem 00:31:55.476 treq: not specified, sq flow control disable supported 00:31:55.476 portid: 1 00:31:55.477 trsvcid: 4420 00:31:55.477 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:55.477 traddr: 10.0.0.1 00:31:55.477 eflags: none 00:31:55.477 sectype: none 00:31:55.477 =====Discovery Log Entry 1====== 00:31:55.477 trtype: tcp 00:31:55.477 adrfam: ipv4 00:31:55.477 subtype: nvme subsystem 00:31:55.477 treq: not specified, sq flow control 
disable supported 00:31:55.477 portid: 1 00:31:55.477 trsvcid: 4420 00:31:55.477 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:55.477 traddr: 10.0.0.1 00:31:55.477 eflags: none 00:31:55.477 sectype: none 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.477 02:07:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.477 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.735 nvme0n1 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:31:55.735 
02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.735 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.736 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.994 nvme0n1 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 
00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.994 02:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.254 nvme0n1 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.254 
02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:56.254 02:07:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.254 nvme0n1 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.254 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.255 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.255 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe2048 3 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:56.512 02:07:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.512 nvme0n1 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.512 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.771 
02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.771 nvme0n1 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:56.771 02:07:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:56.771 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.772 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.772 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:56.772 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:56.772 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.772 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:56.772 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.772 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.031 nvme0n1 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.031 02:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.031 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:57.031 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.031 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.031 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.289 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.289 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:31:57.290 02:07:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.290 nvme0n1 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
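For readers following the trace: the repeated `nvmet_auth_set_key` calls (host/auth.sh@48-51) echo the digest, DH group, and DHHC-1 secrets into the kernel NVMe target's configuration for the host NQN. A hedged sketch of what those `echo` lines plausibly write, using the Linux nvmet configfs host attributes and the keyid=2 secrets visible in this trace (the exact paths and attribute names are assumptions from the standard nvmet-auth configfs layout, not taken from auth.sh itself):

```shell
# Assumed nvmet configfs layout for per-host DH-HMAC-CHAP settings.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest, cf. auth.sh@48
echo 'ffdhe2048'    > "$host/dhchap_dhgroup"   # DH group, cf. auth.sh@49
# host key and (optional) controller key, cf. auth.sh@50-51
echo 'DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX:' \
    > "$host/dhchap_key"
echo 'DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH:' \
    > "$host/dhchap_ctrl_key"
```

This is a configfs fragment: it only makes sense on a machine with the nvmet target loaded, so treat it as a reading aid for the trace rather than something to run standalone.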
00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.290 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.550 
02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.550 nvme0n1 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.550 02:07:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.550 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:57.810 nvme0n1 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.810 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:58.069 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.070 
02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.070 02:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.070 nvme0n1 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.070 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.330 02:07:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.330 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.590 nvme0n1 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.590 02:07:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.590 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.591 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.591 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.591 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.591 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.591 02:07:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.591 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.591 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.591 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.591 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:58.591 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.591 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.851 nvme0n1 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.851 02:07:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.851 02:07:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.851 02:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.119 nvme0n1 00:31:59.120 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.120 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.120 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.120 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.120 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.120 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.420 02:07:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.420 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:31:59.678 nvme0n1 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:59.678 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.679 
02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.679 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.937 nvme0n1 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.937 02:07:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.937 02:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.505 nvme0n1 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.505 02:07:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:00.505 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.506 02:07:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.506 02:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.074 nvme0n1 00:32:01.074 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.074 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.074 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.074 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.074 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.074 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.074 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.074 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.074 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.074 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.333 02:07:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.333 02:07:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.333 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.334 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.334 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.334 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.334 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.334 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.334 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:01.334 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.334 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.900 nvme0n1 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.900 02:07:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:01.900 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.901 02:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:02.469 nvme0n1 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.469 
02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.039 nvme0n1 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.039 02:07:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.039 02:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.976 nvme0n1 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.976 02:07:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.976 02:07:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.976 02:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.910 nvme0n1 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.910 02:07:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.910 02:07:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.910 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.911 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.911 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.911 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.911 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.911 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:04.911 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.911 02:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.850 nvme0n1 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.851 02:07:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.851 02:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
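The cycle repeated throughout this log (set the target key, restrict the host to a single digest/dhgroup pair via `bdev_nvme_set_options`, attach with `--dhchap-key`/`--dhchap-ctrlr-key`, verify the controller name, detach) walks a digest × dhgroup × keyid matrix. Below is a minimal self-contained sketch of the two pieces visible above: the attach argument list (the controller key is appended only when one exists, mirroring `${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}`) and the iteration order. The full digest and dhgroup sets are assumptions, since this excerpt only shows sha256/sha384 with ffdhe8192/ffdhe2048; a real run would invoke `rpc.py` instead of `echo`.

```shell
#!/usr/bin/env bash
# Sketch of the auth matrix exercised by host/auth.sh (sets below are assumptions).
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

# Build the bdev_nvme_attach_controller arguments seen in the log; the
# controller key is appended only for key IDs that have one (key4 does not).
build_attach_args() {
  local keyid=$1 ckey=$2
  local args=(-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420
              -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
              --dhchap-key "key${keyid}")
  if [[ -n $ckey ]]; then
    args+=(--dhchap-ctrlr-key "ckey${keyid}")
  fi
  echo "${args[@]}"
}

# Iterate the matrix; each combination is one attach/verify/detach cycle.
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in 0 1 2 3 4; do
      echo "$digest/$dhgroup/$keyid"
    done
  done
done | wc -l    # 3 digests x 6 dhgroups x 5 keys = 90 combinations
```

The `ckey` guard explains why the keyid=4 attach in this log carries only `--dhchap-key key4`: its `ckey` entry is empty, so the controller-key argument is omitted.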
00:32:06.790 nvme0n1 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.790 
02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.790 02:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.730 nvme0n1 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:07.730 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:07.731 02:07:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.731 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.989 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.989 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.989 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.989 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.989 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:32:07.989 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.989 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.989 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.989 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.989 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.989 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.990 nvme0n1 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:07.990 02:07:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.990 02:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.249 nvme0n1 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:08.249 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.250 
02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.250 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.510 nvme0n1 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.510 02:07:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:08.510 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.511 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:08.771 nvme0n1 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.771 
02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.771 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.032 nvme0n1 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.032 02:07:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.032 02:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.292 nvme0n1 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.292 02:07:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.292 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.293 02:07:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.293 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.552 nvme0n1 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.552 02:07:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.552 02:07:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.552 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.812 nvme0n1 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.812 02:07:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.812 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:10.071 nvme0n1 00:32:10.071 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.071 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.071 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.071 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.071 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.071 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.071 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.071 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.071 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.071 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.071 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.072 
02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.072 02:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.330 nvme0n1 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.330 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.331 02:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.331 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.589 nvme0n1 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.589 02:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:10.589 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.590 02:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.590 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.158 nvme0n1 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.158 02:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.158 02:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.158 02:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.419 nvme0n1 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.419 02:07:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.419 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.420 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:11.420 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.420 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:11.678 nvme0n1 00:32:11.678 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.678 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.678 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.678 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.679 
02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.679 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.938 nvme0n1 00:32:11.938 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.938 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.198 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.198 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.198 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.198 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.198 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.198 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.198 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.198 02:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.198 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.198 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:12.198 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.198 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:12.198 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.198 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.198 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:12.198 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:12.198 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:12.198 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:12.198 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.199 02:07:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.199 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.769 nvme0n1 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.769 02:07:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.769 02:07:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.769 02:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.339 nvme0n1 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.339 02:07:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.339 02:07:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.339 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.908 nvme0n1 00:32:13.908 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.908 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.908 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.908 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.908 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.908 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.908 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.908 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.908 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.909 02:07:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.909 02:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:14.476 nvme0n1 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.477 
02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.477 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.044 nvme0n1 00:32:15.044 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.044 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.044 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.044 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.044 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.044 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.044 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.044 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.044 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.044 02:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.044 02:07:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.044 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.018 nvme0n1 00:32:16.018 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.018 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.018 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.018 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.018 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.018 02:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.018 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.018 02:07:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.018 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.018 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.277 02:07:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.277 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.218 nvme0n1 00:32:17.218 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.218 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.218 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.218 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.218 02:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.218 02:07:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.218 02:07:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.218 02:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.158 nvme0n1 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.158 02:08:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:18.158 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.159 02:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:19.096 nvme0n1 00:32:19.096 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.096 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.096 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.096 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.096 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.096 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.096 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.096 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.096 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.096 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.354 
02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.354 02:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.292 nvme0n1 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.292 02:08:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:32:20.292 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.293 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.293 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.293 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.293 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.293 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.293 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.293 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:20.293 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.293 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.551 nvme0n1 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.551 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:20.552 02:08:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.552 nvme0n1 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.552 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.810 
02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.810 nvme0n1 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.810 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.068 02:08:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:21.068 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.069 02:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:21.069 nvme0n1 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.069 
02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.069 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.327 nvme0n1 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
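Each `connect_authenticate` iteration in the trace builds the optional controller-key argument with bash's `${var:+word}` conditional expansion (`ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})`), so the flag pair is omitted entirely when no controller key is configured — which is why the keyid 4 attach above carries only `--dhchap-key key4`. A minimal standalone sketch of that idiom (the key values and `build_args` helper are illustrative, not from the test scripts):

```shell
#!/usr/bin/env bash
# Sketch of the ${var:+...} idiom used to build optional RPC arguments.
# ckeys[4] is empty, mirroring the keyid=4 case in the log (ckey='').
declare -a ckeys=()
ckeys[0]="DHHC-1:03:example-controller-key"
ckeys[4]=""

build_args() {
    local keyid=$1
    # Expands to the two extra words only when ckeys[keyid] is non-empty;
    # otherwise the array stays empty and no flag is emitted.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo --dhchap-key "key${keyid}" "${ckey[@]}"
}

build_args 0   # -> --dhchap-key key0 --dhchap-ctrlr-key ckey0
build_args 4   # -> --dhchap-key key4
```

An empty array expanded as `"${ckey[@]}"` contributes zero words, so the attach command's argument list stays well-formed in both cases.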
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.327 02:08:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.327 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.584 nvme0n1 00:32:21.584 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.584 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.584 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.584 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.584 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.584 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.584 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.584 02:08:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.584 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.585 02:08:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.585 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.843 nvme0n1 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.843 02:08:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.843 02:08:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.843 02:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.101 nvme0n1 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.101 02:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:22.101 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.102 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
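The cycle the trace above keeps repeating (configure DH-CHAP digest/dhgroup, attach a controller with a key/ctrlr-key pair, confirm the controller name, detach) can be sketched as a standalone RPC sequence. This is a minimal sketch, not the test script itself: it assumes a running SPDK target already provisioned with `key3`/`ckey3`, and uses `scripts/rpc.py`, SPDK's standard JSON-RPC client. The NQNs, address, port, and key names are taken verbatim from the log.

```shell
#!/usr/bin/env bash
set -e
rpc=scripts/rpc.py   # path inside an SPDK checkout (assumption)

# Restrict the initiator to one digest and one DH group, as in the trace
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Attach with DH-HMAC-CHAP, supplying both the host key and the controller key
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Verify authentication succeeded: the controller must show up by name
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# Tear down before the next digest/dhgroup combination
$rpc bdev_nvme_detach_controller nvme0
```

The trace runs this loop once per `keyid` (0..4) and again for each dhgroup (`ffdhe3072`, `ffdhe4096`, ...), which is why the same attach/get/detach pattern recurs with only the key names changing.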
00:32:22.361 nvme0n1 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.361 
02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.361 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.362 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.362 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.362 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.362 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.362 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.362 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.362 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.362 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:22.362 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.362 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.622 nvme0n1 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.622 02:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.622 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.881 nvme0n1 00:32:22.881 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.881 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.881 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.881 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.881 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.140 02:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.140 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.141 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.141 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.141 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.141 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.141 02:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.141 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.141 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.141 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.141 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:23.141 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.141 02:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.401 nvme0n1 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.401 02:08:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.401 02:08:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.401 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.660 nvme0n1 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.661 02:08:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.661 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
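The `DHHC-1:xx:...:` strings passed to `--dhchap-key`/`--dhchap-ctrlr-key` above use the NVMe DH-HMAC-CHAP secret representation: a hash identifier followed by base64 of the secret with a 4-byte CRC-32 appended. A minimal, illustrative sketch of parsing one of the keys printed in this trace (the layout and CRC placement are assumptions based on the spec, not taken from the test script itself):

```python
import base64
import zlib

def parse_dhchap_secret(key: str):
    """Split a DHHC-1 secret into (hash_id, secret_bytes, crc_bytes).

    Assumed layout: "DHHC-1:<hh>:<base64(secret || CRC-32)>:" where the
    trailing 4 bytes of the decoded payload are a CRC-32 of the secret
    (little-endian per common tooling). Illustrative sketch only.
    """
    prefix, hash_id, b64 = key.rstrip(":").split(":")
    assert prefix == "DHHC-1"
    raw = base64.b64decode(b64)
    secret, crc = raw[:-4], raw[-4:]
    # Computed here for comparison; shown rather than asserted, since the
    # exact CRC byte order is an assumption in this sketch.
    computed = zlib.crc32(secret).to_bytes(4, "little")
    return hash_id, secret, crc == computed

# keyid=2's secret as it appears in the trace above
hid, secret, crc_ok = parse_dhchap_secret(
    "DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX:")
print(hid, len(secret), crc_ok)
```

Note that the 48 base64 characters decode to 36 bytes: a 32-byte secret plus the 4-byte CRC.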
00:32:24.238 nvme0n1 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.239 02:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.239 
02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.239 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.503 nvme0n1 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.503 02:08:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.503 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.070 nvme0n1 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.070 02:08:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.070 02:08:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.070 02:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.634 nvme0n1 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.634 02:08:07 
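Each repeating block in this trace is one iteration of the same sweep: for a given digest and DH group, `host/auth.sh` programs the target key via `nvmet_auth_set_key`, constrains the initiator with `bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ...`, attaches with the matching `--dhchap-key keyN` (adding `--dhchap-ctrlr-key ckeyN` only when a controller key exists; key 4 has none), confirms `nvme0` via `bdev_nvme_get_controllers`, and detaches. A hypothetical sketch of that matrix, generating command strings that mirror the `rpc_cmd` invocations visible in the log (the loop itself is an illustration, not the actual test script):

```python
# Sketch of the digest/dhgroup/keyid sweep driven by host/auth.sh.
digests = ["sha512"]                    # the digest active in this slice
dhgroups = ["ffdhe4096", "ffdhe6144"]   # groups exercised in this slice
keyids = range(5)                       # keys 0-4; key 4 has no ckey

def rpc_sequence(digest, dhgroup, keyid, has_ckey):
    """Return the per-iteration RPC command strings, as seen in the trace."""
    ckey = f" --dhchap-ctrlr-key ckey{keyid}" if has_ckey else ""
    return [
        f"bdev_nvme_set_options --dhchap-digests {digest} "
        f"--dhchap-dhgroups {dhgroup}",
        "bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 "
        "-a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 "
        f"-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key{keyid}{ckey}",
        "bdev_nvme_get_controllers",
        "bdev_nvme_detach_controller nvme0",
    ]

runs = [rpc_sequence(d, g, k, has_ckey=(k != 4))
        for d in digests for g in dhgroups for k in keyids]
print(len(runs))  # 10 iterations for this digest across both groups
```

The controller is named `nvme0` on every attach, which is why the log checks `[[ nvme0 == \n\v\m\e\0 ]]` against `bdev_nvme_get_controllers | jq -r '.[].name'` before each detach.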
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.634 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.635 02:08:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.635 02:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.204 nvme0n1 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.204 02:08:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:26.204 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.205 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:26.205 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.205 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.465 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
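The `DHHC-1:...` strings in the trace above are DH-HMAC-CHAP secrets. Per the NVMe base specification's secret representation, the format is `DHHC-1:<hash id>:<base64(key || crc32(key))>:`, where the hash id `00`/`01`/`02`/`03` selects no hash/SHA-256/SHA-384/SHA-512. A minimal shell sketch that picks apart one of the keyid-4 secrets from this log (the trailing four decoded bytes are assumed to be the CRC-32 of the key, as generated by the test's key helper):

```shell
# Decode one DHHC-1 secret taken verbatim from the log above.
secret='DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=:'

hash_id=$(cut -d: -f2 <<< "$secret")   # 03 => hmac(sha512)
b64=$(cut -d: -f3 <<< "$secret")

# The base64 payload is the key followed by a 4-byte CRC-32 of the key;
# subtract it to recover the key length.
keylen=$(( $(base64 -d <<< "$b64" | wc -c) - 4 ))

echo "hash=$hash_id keylen=$keylen"
```

For this secret the key portion is 64 bytes, consistent with the `:03:` (SHA-512) hash id.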
00:32:27.035 nvme0n1 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.035 
02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.035 02:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.605 nvme0n1 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM1MGI3MWI1YWE3MjY4OTg4NjE1M2Q2ZjdhMzAzNjIQkxsz: 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: ]] 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQ2NTI5NGMxOWMyMGVhNzI4ZmE2MWZlYjMzYTBmNDBkMmRhMmQxOGVjYWYzODM0ODMzMTQ0Zjc3ZDEwYWRmYnFnlH8=: 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.605 02:08:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.605 02:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.545 nvme0n1 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.545 02:08:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.545 02:08:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.545 02:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.479 nvme0n1 00:32:29.479 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.479 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.479 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.479 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.479 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.479 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.479 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.479 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.479 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.479 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.479 02:08:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzgxYTM0OGY3NzYwNjk1YzlmNDA2ZTIwM2Q0ZDBkOGQgOjTX: 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: ]] 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWMxMTc0NzA0OTc5YjgwNzE4NTlhZGYzZTAwZGIyZDNmjapH: 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.480 02:08:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.480 02:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.855 nvme0n1 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.855 02:08:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNiMDA4NmIwM2Q2MTY4ZmU5NjcyYTQ2NmZkM2NkODJkOWQ3NDUwNDFjMWQ3OTQ1m5lhmg==: 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: ]] 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0MGY2OTg1NGRhNmIwNzNiZGRkYjZhY2FiZjVlODbH0ALv: 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.855 02:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
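The iterations traced above all follow one pattern from `host/auth.sh`: for a fixed digest, loop over each DH group and key index, restrict the initiator to that one digest/group combination with `bdev_nvme_set_options`, authenticate via `bdev_nvme_attach_controller` with the pre-generated key pair, then detach. A sketch of that loop, with `rpc_cmd` stubbed out to print the `rpc.py` call it would issue against a live SPDK target (the real helper talks to the target's RPC socket):

```shell
# Stub: the real rpc_cmd forwards to scripts/rpc.py against a running target.
rpc_cmd() { echo "rpc.py $*"; }

digest=sha512
ckeys=(set set set set "")   # key index 4 has no controller key in this run
calls=0
for dhgroup in ffdhe6144 ffdhe8192; do
  for keyid in 0 1 2 3 4; do
    # Allow exactly one digest/DH-group combination on the initiator side.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
      --dhchap-dhgroups "$dhgroup" > /dev/null
    # Authenticate with the key pair for this index; the controller key is
    # omitted when no ckey exists, mirroring host/auth.sh@58.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" \
      ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} > /dev/null
    rpc_cmd bdev_nvme_detach_controller nvme0 > /dev/null
    calls=$(( calls + 3 ))
  done
done
echo "issued $calls rpc calls"
```

With two DH groups and five key indices, one digest's pass issues 30 rpc calls, matching the repeating set-options/attach/detach cadence visible in the timestamps.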
00:32:31.793 nvme0n1 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDI4ZjMyOWI2OTdlY2Q1MzU3ZTI4ZGVmMGQ2MGZkZjg1ZWRlNjY4MzU0ODdmNzdmZGM5Njg2OTg4YTMzZDVhMpSfK24=: 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.793 
02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.793 02:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.758 nvme0n1 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==: 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: ]] 00:32:32.758 
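The key and ckey strings echoed above all share the `DHHC-1:<hh>:<base64>:` framing, where `<hh>` is the hash indicator (`00` for an unhashed secret through `03` for SHA-512). A small structural validator for that framing is sketched below; the example key is copied from the trace, while the claim that the base64 blob ends in a CRC-32 of the secret (as `nvme gen-dhchap-key` produces) is an outside assumption and is deliberately not checked here:

```shell
#!/usr/bin/env bash
# Check the "DHHC-1:<hh>:<base64>:" framing of an in-band auth key string.
# Only the framing is validated; the CRC trailer inside the base64 blob is
# assumed, not verified.
check_dhchap_key() {
    local key=$1 prefix hash b64 rest
    IFS=: read -r prefix hash b64 rest <<< "$key"
    [[ $prefix == DHHC-1 ]]  || { echo invalid; return 1; }
    [[ $hash =~ ^0[0-3]$ ]]  || { echo invalid; return 1; }
    printf '%s' "$b64" | base64 -d > /dev/null 2>&1 \
                             || { echo invalid; return 1; }
    echo "valid (hash id $hash)"
}

# Key taken verbatim from the nvmet_auth_set_key call above.
check_dhchap_key "DHHC-1:00:MjU2N2Y2NDE4MWI2YTU1YmZlY2YzOTg0MjY0N2RhNGUzZjQ1NGI3OTU4NmFhZTMzDr7++Q==:"
```

This is only a sanity filter; the target itself rejects keys with a bad checksum or hash id at configuration time.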
02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFiMDU4NDExZjUyMWEyYzJlNzJjNGFlYzA5MDQ0ZWEyNGZmZTRiN2YzZTY3N2Zjie205g==: 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.758 request: 00:32:32.758 { 00:32:32.758 "name": "nvme0", 00:32:32.758 "trtype": "tcp", 00:32:32.758 "traddr": "10.0.0.1", 00:32:32.758 "adrfam": "ipv4", 00:32:32.758 "trsvcid": "4420", 00:32:32.758 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:32.758 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:32.758 "prchk_reftag": false, 00:32:32.758 "prchk_guard": false, 00:32:32.758 "hdgst": false, 00:32:32.758 "ddgst": false, 00:32:32.758 "method": "bdev_nvme_attach_controller", 00:32:32.758 "req_id": 1 00:32:32.758 } 00:32:32.758 Got JSON-RPC error response 00:32:32.758 response: 00:32:32.758 { 00:32:32.758 "code": -5, 00:32:32.758 "message": "Input/output error" 00:32:32.758 } 00:32:32.758 02:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.758 02:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:32.758 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.759 request: 00:32:32.759 { 00:32:32.759 "name": "nvme0", 00:32:32.759 "trtype": "tcp", 00:32:32.759 "traddr": "10.0.0.1", 00:32:32.759 "adrfam": "ipv4", 00:32:32.759 
"trsvcid": "4420", 00:32:32.759 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:32.759 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:32.759 "prchk_reftag": false, 00:32:32.759 "prchk_guard": false, 00:32:32.759 "hdgst": false, 00:32:32.759 "ddgst": false, 00:32:32.759 "dhchap_key": "key2", 00:32:32.759 "method": "bdev_nvme_attach_controller", 00:32:32.759 "req_id": 1 00:32:32.759 } 00:32:32.759 Got JSON-RPC error response 00:32:32.759 response: 00:32:32.759 { 00:32:32.759 "code": -5, 00:32:32.759 "message": "Input/output error" 00:32:32.759 } 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.759 
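The `NOT rpc_cmd bdev_nvme_attach_controller ...` calls above deliberately attach with a missing or mismatched DH-HMAC-CHAP key, so the JSON-RPC `-5` (`Input/output error`) responses are the *expected* outcome; a wrapper in autotest_common.sh inverts the exit status (the `es=1`, `(( es > 128 ))`, `(( !es == 0 ))` lines in the trace) so the failure counts as a pass. A simplified reimplementation of that pattern, for illustration only and not the SPDK helper itself:

```shell
#!/usr/bin/env bash
# NOT <cmd...>: succeed only when <cmd> fails, as the auth tests expect for
# attach attempts made with a missing or wrong DH-HMAC-CHAP key.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 mean the command died on a signal; treat those as
    # real failures rather than the expected kind.
    (( es > 128 )) && return "$es"
    (( es != 0 ))   # invert: failure -> 0, success -> 1
}

NOT false && echo "expected failure detected"
NOT true  || echo "unexpected success flagged"
```

Inverting at the wrapper level keeps the test script's `set -e`-style flow intact: an attach that *succeeds* without the right key is what aborts the run.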
02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.759 02:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.759 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.019 request: 00:32:33.019 { 00:32:33.019 "name": "nvme0", 00:32:33.019 "trtype": "tcp", 00:32:33.019 "traddr": "10.0.0.1", 00:32:33.019 "adrfam": "ipv4", 00:32:33.019 "trsvcid": "4420", 00:32:33.019 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:33.019 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:33.019 "prchk_reftag": false, 00:32:33.019 "prchk_guard": false, 00:32:33.019 "hdgst": false, 00:32:33.019 "ddgst": false, 00:32:33.019 "dhchap_key": "key1", 00:32:33.019 "dhchap_ctrlr_key": "ckey2", 00:32:33.019 "method": "bdev_nvme_attach_controller", 00:32:33.019 "req_id": 1 00:32:33.019 } 00:32:33.019 Got JSON-RPC error response 00:32:33.019 response: 00:32:33.019 { 00:32:33.019 "code": -5, 00:32:33.019 "message": "Input/output error" 00:32:33.019 } 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:33.019 02:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:33.019 rmmod nvme_tcp 00:32:33.019 rmmod nvme_fabrics 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2401908 ']' 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2401908 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2401908 ']' 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2401908 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2401908 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2401908' 00:32:33.019 killing process with pid 2401908 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2401908 00:32:33.019 02:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2401908 00:32:33.278 02:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:33.278 02:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:33.278 02:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:33.278 02:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:33.278 02:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:33.278 02:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.278 02:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.278 02:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 
00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:35.182 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:35.442 02:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:36.379 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:36.638 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:36.638 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:36.638 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:36.638 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:36.638 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:36.638 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:36.638 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:36.638 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:36.638 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:36.638 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:36.638 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:36.638 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:36.638 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:36.638 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:36.638 0000:80:04.0 (8086 0e20): 
ioatdma -> vfio-pci 00:32:37.575 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:37.575 02:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.PRf /tmp/spdk.key-null.qjF /tmp/spdk.key-sha256.0NK /tmp/spdk.key-sha384.kqK /tmp/spdk.key-sha512.Crv /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:37.575 02:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:38.951 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:38.951 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:38.951 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:38.951 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:38.951 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:38.951 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:38.951 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:38.951 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:38.951 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:38.951 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:38.951 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:38.951 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:38.951 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:38.951 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:38.951 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:38.951 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:38.951 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:39.209 00:32:39.209 real 0m49.947s 00:32:39.209 user 0m47.713s 00:32:39.209 sys 0m5.888s 00:32:39.209 02:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:39.209 02:08:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.209 ************************************ 00:32:39.209 END TEST nvmf_auth_host 00:32:39.209 ************************************ 00:32:39.209 02:08:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:39.209 02:08:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:39.209 02:08:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:39.209 02:08:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:39.209 02:08:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.209 ************************************ 00:32:39.209 START TEST nvmf_digest 00:32:39.209 ************************************ 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:39.209 * Looking for test storage... 
00:32:39.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.209 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.210 02:08:21 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:32:39.210 02:08:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.736 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:41.737 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
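The scan traced above buckets NICs by PCI vendor:device ID (Intel `0x8086` into `e810`/`x722`, Mellanox `0x15b3` into `mlx`) before deciding which ports the test can use. A minimal sketch of that classification logic, with the device list passed in explicitly rather than read from sysfs (the `classify` helper and its arguments are illustrative, not part of `nvmf/common.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the vendor:device bucketing done by gather_supported_nvmf_pci_devs.
# The BDF/ID pairs fed in below are illustrative stand-ins for the sysfs scan.
intel=0x8086 mellanox=0x15b3
declare -a e810 x722 mlx

classify() { # classify <bdf> <vendor-id> <device-id>
  local bdf=$1 vendor=$2 device=$3
  case "$vendor:$device" in
    "$intel:0x1592"|"$intel:0x159b") e810+=("$bdf") ;;   # E810 family (ice)
    "$intel:0x37d2")                 x722+=("$bdf") ;;   # X722 (i40e)
    "$mellanox:"*)                   mlx+=("$bdf")  ;;   # any Mellanox ID (simplified)
  esac
}

# The two ports this log actually found: 0x8086:0x159b -> e810 bucket.
classify 0000:0a:00.0 0x8086 0x159b
classify 0000:0a:00.1 0x8086 0x159b
echo "e810: ${e810[*]}"
```

Note the real script matches a fixed list of Mellanox device IDs (0xa2dc, 0x1021, ...); the wildcard above is a simplification.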
00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:41.737 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.737 02:08:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:41.737 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:41.737 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:41.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:41.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:32:41.737 00:32:41.737 --- 10.0.0.2 ping statistics --- 00:32:41.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.737 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:41.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:41.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:32:41.737 00:32:41.737 --- 10.0.0.1 ping statistics --- 00:32:41.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.737 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
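The `nvmf_tcp_init` sequence traced above moves one port (`cvl_0_0`) into a network namespace as the target side, leaves its peer (`cvl_0_1`) in the root namespace as the initiator, opens port 4420, and verifies reachability with ping in both directions. A condensed dry-run sketch of the same sequence (the `run` wrapper only echoes each command, since executing them needs root and the `cvl_*` interfaces):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence from nvmf/common.sh.
# run() only prints the command; drop it to execute for real (requires root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk             # target-side network namespace
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                        # target NIC into the namespace
run ip addr add "$INI_IP/24" dev "$INI_IF"                   # initiator side, root namespace
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                                      # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INI_IP"                  # target -> initiator
```

This is why every target-side command in the rest of the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).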
00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:41.737 ************************************ 00:32:41.737 START TEST nvmf_digest_clean 00:32:41.737 ************************************ 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
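Before launching any tests, `host/digest.sh` installs `trap cleanup SIGINT SIGTERM EXIT`, so the namespace and child processes are torn down even if a test aborts. The pattern, reduced to its core (the `cleanup` body here is a stand-in for the script's real `cleanup`/`nvmftestfini`):

```shell
#!/usr/bin/env bash
# Trap-based cleanup pattern used throughout these tests: register the
# handler once, and it runs on Ctrl-C, kill, or normal exit alike.
cleanup() { echo "cleanup ran"; }   # stand-in for the real teardown
trap cleanup SIGINT SIGTERM EXIT
echo "test body"
# "cleanup ran" is printed automatically when the shell exits.
```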
00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2411354 00:32:41.737 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2411354 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2411354 ']' 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:41.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.738 [2024-07-26 02:08:23.352605] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:32:41.738 [2024-07-26 02:08:23.352705] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:41.738 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.738 [2024-07-26 02:08:23.419796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.738 [2024-07-26 02:08:23.506743] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:41.738 [2024-07-26 02:08:23.506805] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:41.738 [2024-07-26 02:08:23.506833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:41.738 [2024-07-26 02:08:23.506845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:41.738 [2024-07-26 02:08:23.506855] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:41.738 [2024-07-26 02:08:23.506891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.738 null0 00:32:41.738 [2024-07-26 02:08:23.695206] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:41.738 [2024-07-26 02:08:23.719468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
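`run_bperf randread 4096 128 false` sets the workload, I/O size, queue depth, and DSA flag for bdevperf. In the bdevperf summary tables that follow in this log, the IOPS and MiB/s columns are related by the I/O size: MiB/s = IOPS × bs / 2²⁰. A quick check against the two runs' numbers (`mibps` is an illustrative helper, not part of the test scripts):

```shell
#!/usr/bin/env bash
# Cross-check bdevperf's reported throughput columns:
#   MiB/s = IOPS * bs_bytes / (1024 * 1024)
mibps() { awk -v iops="$1" -v bs="$2" 'BEGIN { printf "%.2f\n", iops * bs / 1048576 }'; }

mibps 17995.26 4096     # 4 KiB randread run in this log  -> 70.29
mibps 3986.40 131072    # 128 KiB randread run            -> 498.30
```

Both values match the MiB/s column bdevperf prints, which is a useful sanity check when reading these summaries.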
00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2411376 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2411376 /var/tmp/bperf.sock 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2411376 ']' 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:41.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
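The `waitforlisten` step above blocks until the freshly launched app's UNIX-domain RPC socket (`/var/tmp/bperf.sock` here) exists, so the subsequent `rpc.py` calls don't race the startup. A simplified sketch of that polling loop (`wait_for_rpc_sock` is a hypothetical name; the real helper in `autotest_common.sh` also verifies the pid is still alive and uses its own retry bookkeeping):

```shell
#!/usr/bin/env bash
# Minimal stand-in for waitforlisten: poll until the RPC socket appears.
wait_for_rpc_sock() { # wait_for_rpc_sock <socket-path> [max_retries]
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                       # timed out
}
```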
00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:41.738 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.994 [2024-07-26 02:08:23.770319] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:32:41.995 [2024-07-26 02:08:23.770393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411376 ] 00:32:41.995 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.995 [2024-07-26 02:08:23.838582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.995 [2024-07-26 02:08:23.931199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.995 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:41.995 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:41.995 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:41.995 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:41.995 02:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:42.561 02:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:42.561 02:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:42.818 nvme0n1 00:32:43.075 02:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:43.075 02:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:43.075 Running I/O for 2 seconds... 00:32:44.976 00:32:44.976 Latency(us) 00:32:44.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.976 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:44.976 nvme0n1 : 2.00 17995.26 70.29 0.00 0.00 7105.08 3276.80 16019.91 00:32:44.976 =================================================================================================================== 00:32:44.976 Total : 17995.26 70.29 0.00 0.00 7105.08 3276.80 16019.91 00:32:44.976 0 00:32:44.976 02:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:44.976 02:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:44.976 02:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:44.976 02:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:44.976 02:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:44.976 | select(.opcode=="crc32c") 00:32:44.976 | "\(.module_name) \(.executed)"' 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2411376 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2411376 ']' 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2411376 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2411376 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2411376' 00:32:45.234 killing process with pid 2411376 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2411376 00:32:45.234 Received shutdown signal, test time was about 2.000000 seconds 00:32:45.234 00:32:45.234 Latency(us) 00:32:45.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.234 =================================================================================================================== 00:32:45.234 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:32:45.234 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2411376 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2411858 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2411858 /var/tmp/bperf.sock 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2411858 ']' 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:45.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:45.492 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:45.492 [2024-07-26 02:08:27.500136] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:32:45.492 [2024-07-26 02:08:27.500229] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411858 ] 00:32:45.492 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:45.492 Zero copy mechanism will not be used. 00:32:45.750 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.750 [2024-07-26 02:08:27.563570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.750 [2024-07-26 02:08:27.653543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.750 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:45.750 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:45.750 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:45.750 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:45.750 02:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:46.318 02:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:46.318 02:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:46.576 nvme0n1 00:32:46.576 02:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:46.576 02:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:46.834 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:46.834 Zero copy mechanism will not be used. 00:32:46.834 Running I/O for 2 seconds... 00:32:48.736 00:32:48.736 Latency(us) 00:32:48.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.736 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:48.736 nvme0n1 : 2.00 3986.40 498.30 0.00 0.00 4008.72 1055.86 10388.67 00:32:48.736 =================================================================================================================== 00:32:48.736 Total : 3986.40 498.30 0.00 0.00 4008.72 1055.86 10388.67 00:32:48.736 0 00:32:48.736 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:48.736 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:48.736 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:48.736 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bperf.sock accel_get_stats 00:32:48.736 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:48.736 | select(.opcode=="crc32c") 00:32:48.736 | "\(.module_name) \(.executed)"' 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2411858 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2411858 ']' 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2411858 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2411858 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:48.993 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2411858' 00:32:48.993 killing process with pid 2411858 00:32:48.993 02:08:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2411858 00:32:48.993 Received shutdown signal, test time was about 2.000000 seconds 00:32:48.993 00:32:48.993 Latency(us) 00:32:48.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.994 =================================================================================================================== 00:32:48.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:48.994 02:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2411858 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2412307 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2412307 /var/tmp/bperf.sock 00:32:49.251 02:08:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2412307 ']' 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:49.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:49.251 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:49.251 [2024-07-26 02:08:31.159587] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:32:49.251 [2024-07-26 02:08:31.159661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2412307 ] 00:32:49.251 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.251 [2024-07-26 02:08:31.217674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.510 [2024-07-26 02:08:31.305010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.510 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:49.510 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:49.510 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:49.510 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:49.510 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:49.768 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:49.768 02:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:50.337 nvme0n1 00:32:50.337 02:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:50.337 02:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:50.337 Running I/O for 2 seconds... 00:32:52.294 00:32:52.294 Latency(us) 00:32:52.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.294 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.294 nvme0n1 : 2.00 20370.87 79.57 0.00 0.00 6273.20 3373.89 12039.21 00:32:52.294 =================================================================================================================== 00:32:52.294 Total : 20370.87 79.57 0.00 0.00 6273.20 3373.89 12039.21 00:32:52.294 0 00:32:52.294 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:52.294 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:52.294 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:52.294 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:52.294 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:52.294 | select(.opcode=="crc32c") 00:32:52.294 | "\(.module_name) \(.executed)"' 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 2412307 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2412307 ']' 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2412307 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2412307 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2412307' 00:32:52.553 killing process with pid 2412307 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2412307 00:32:52.553 Received shutdown signal, test time was about 2.000000 seconds 00:32:52.553 00:32:52.553 Latency(us) 00:32:52.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.553 =================================================================================================================== 00:32:52.553 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:52.553 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2412307 00:32:52.811 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:52.811 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:32:52.811 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:52.811 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:52.811 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:52.811 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:52.811 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:52.811 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2412717 00:32:52.811 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2412717 /var/tmp/bperf.sock 00:32:52.812 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:52.812 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2412717 ']' 00:32:52.812 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:52.812 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:52.812 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:52.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:32:52.812 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:52.812 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:52.812 [2024-07-26 02:08:34.779404] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:32:52.812 [2024-07-26 02:08:34.779479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2412717 ] 00:32:52.812 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:52.812 Zero copy mechanism will not be used. 00:32:52.812 EAL: No free 2048 kB hugepages reported on node 1 00:32:53.069 [2024-07-26 02:08:34.842698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.069 [2024-07-26 02:08:34.932628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.069 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:53.069 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:53.069 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:53.069 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:53.069 02:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:53.326 02:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.326 02:08:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.894 nvme0n1 00:32:53.894 02:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:53.894 02:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:53.894 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:53.894 Zero copy mechanism will not be used. 00:32:53.894 Running I/O for 2 seconds... 00:32:55.801 00:32:55.801 Latency(us) 00:32:55.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.801 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:55.801 nvme0n1 : 2.00 3083.00 385.38 0.00 0.00 5177.21 3640.89 13010.11 00:32:55.801 =================================================================================================================== 00:32:55.801 Total : 3083.00 385.38 0.00 0.00 5177.21 3640.89 13010.11 00:32:55.801 0 00:32:56.060 02:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:56.060 02:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:56.060 02:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:56.060 02:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:56.060 | select(.opcode=="crc32c") 00:32:56.060 | "\(.module_name) \(.executed)"' 00:32:56.060 02:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:56.060 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:56.060 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:56.060 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:56.060 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:56.060 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2412717 00:32:56.060 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2412717 ']' 00:32:56.061 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2412717 00:32:56.061 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:56.061 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:56.061 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2412717 00:32:56.319 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:56.319 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:56.319 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2412717' 00:32:56.319 killing process with pid 2412717 00:32:56.319 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2412717 00:32:56.319 Received shutdown signal, test time was about 2.000000 seconds 
00:32:56.319 00:32:56.319 Latency(us) 00:32:56.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.319 =================================================================================================================== 00:32:56.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:56.319 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2412717 00:32:56.319 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2411354 00:32:56.319 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2411354 ']' 00:32:56.319 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2411354 00:32:56.319 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:56.319 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:56.319 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2411354 00:32:56.578 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:56.578 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:56.578 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2411354' 00:32:56.578 killing process with pid 2411354 00:32:56.578 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2411354 00:32:56.578 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2411354 00:32:56.578 00:32:56.578 real 0m15.282s 00:32:56.578 user 0m30.189s 00:32:56.578 sys 0m4.227s 
00:32:56.578 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:56.578 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:56.578 ************************************ 00:32:56.578 END TEST nvmf_digest_clean 00:32:56.578 ************************************ 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:56.837 ************************************ 00:32:56.837 START TEST nvmf_digest_error 00:32:56.837 ************************************ 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2413154 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
nvmf/common.sh@482 -- # waitforlisten 2413154 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2413154 ']' 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:56.837 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:56.837 [2024-07-26 02:08:38.684481] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:32:56.837 [2024-07-26 02:08:38.684560] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:56.837 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.837 [2024-07-26 02:08:38.749944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.837 [2024-07-26 02:08:38.837005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:56.837 [2024-07-26 02:08:38.837086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:56.837 [2024-07-26 02:08:38.837102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:56.837 [2024-07-26 02:08:38.837113] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:56.837 [2024-07-26 02:08:38.837123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:56.837 [2024-07-26 02:08:38.837165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.095 [2024-07-26 02:08:38.917751] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.095 02:08:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.095 02:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.095 null0 00:32:57.095 [2024-07-26 02:08:39.036672] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.095 [2024-07-26 02:08:39.060912] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.095 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.095 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2413291 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2413291 /var/tmp/bperf.sock 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2413291 ']' 
00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:57.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:57.096 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.354 [2024-07-26 02:08:39.112666] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:32:57.354 [2024-07-26 02:08:39.112745] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413291 ] 00:32:57.354 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.354 [2024-07-26 02:08:39.179563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.354 [2024-07-26 02:08:39.266779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.612 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:57.612 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:57.612 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:57.612 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:57.612 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:57.612 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.612 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.871 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.871 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:57.871 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.129 nvme0n1 00:32:58.129 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:58.129 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.129 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:58.129 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.129 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:58.129 02:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:58.129 Running I/O for 2 seconds... 00:32:58.129 [2024-07-26 02:08:40.079153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.129 [2024-07-26 02:08:40.079219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.129 [2024-07-26 02:08:40.079240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.129 [2024-07-26 02:08:40.095478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.129 [2024-07-26 02:08:40.095514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.129 [2024-07-26 02:08:40.095534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.129 [2024-07-26 02:08:40.108201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.129 [2024-07-26 02:08:40.108248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.129 [2024-07-26 02:08:40.108265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.129 [2024-07-26 02:08:40.122648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.129 [2024-07-26 02:08:40.122684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11121 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.129 [2024-07-26 02:08:40.122710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.129 [2024-07-26 02:08:40.135494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.129 [2024-07-26 02:08:40.135529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.129 [2024-07-26 02:08:40.135548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.149581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.149617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.149635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.166463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.166499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.166518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.180277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.180322] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.180339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.194029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.194070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.194110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.207514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.207548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.207567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.219166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.219210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.219225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.234765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 
02:08:40.234800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.234818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.249118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.249148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.249179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.261485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.261520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.261538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.278723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.278758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.278777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.293942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.293977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.293995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.309675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.309710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.309728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.323131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.323161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.323197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.335611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.335646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.335665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.349567] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.349602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.349621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.362230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.362259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.362301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.377929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.377964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.377982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.390 [2024-07-26 02:08:40.393578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.390 [2024-07-26 02:08:40.393619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.390 [2024-07-26 02:08:40.393638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.405538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.405574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.405592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.420549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.420583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.420602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.433972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.434008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.434027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.447201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.447233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.447250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.460435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.460472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.460492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.473727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.473761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.473780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.487353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.487407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.487427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.500053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.500095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.500129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.514941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.514981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.515001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.530790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.530824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.530843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.543318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.543374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.543394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.556930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.556964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:58.651 [2024-07-26 02:08:40.556983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.570188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.570217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.570250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.583533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.583567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.583586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.651 [2024-07-26 02:08:40.596814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.651 [2024-07-26 02:08:40.596849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.651 [2024-07-26 02:08:40.596867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.652 [2024-07-26 02:08:40.610619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.652 [2024-07-26 02:08:40.610654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:23695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.652 [2024-07-26 02:08:40.610673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.652 [2024-07-26 02:08:40.625963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.652 [2024-07-26 02:08:40.626021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.652 [2024-07-26 02:08:40.626041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.652 [2024-07-26 02:08:40.639383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.652 [2024-07-26 02:08:40.639418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.652 [2024-07-26 02:08:40.639436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.652 [2024-07-26 02:08:40.653854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.652 [2024-07-26 02:08:40.653887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.652 [2024-07-26 02:08:40.653907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.910 [2024-07-26 02:08:40.666712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.666747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.666766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.678815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.678849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.678868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.693961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.693996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.694016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.712241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.712269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.712300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.728169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 
00:32:58.911 [2024-07-26 02:08:40.728197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.728236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.741257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.741285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.741315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.756688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.756722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.756741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.770277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.770306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.770337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.783294] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.783323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.783354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.796549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.796583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.796602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.809396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.809430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.809448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.821759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.821793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.821811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.837817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.837850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.837869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.854038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.854086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.854111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.866738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.866772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.866791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.911 [2024-07-26 02:08:40.883956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:58.911 [2024-07-26 02:08:40.883991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.911 [2024-07-26 02:08:40.884011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:58.911 [2024-07-26 02:08:40.901217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:58.911 [2024-07-26 02:08:40.901255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.911 [2024-07-26 02:08:40.901274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:58.911 [2024-07-26 02:08:40.916006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:58.911 [2024-07-26 02:08:40.916041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.911 [2024-07-26 02:08:40.916069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.170 [2024-07-26 02:08:40.928104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.170 [2024-07-26 02:08:40.928148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.170 [2024-07-26 02:08:40.928164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.170 [2024-07-26 02:08:40.946329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.170 [2024-07-26 02:08:40.946358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.170 [2024-07-26 02:08:40.946374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.170 [2024-07-26 02:08:40.962702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.170 [2024-07-26 02:08:40.962737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.170 [2024-07-26 02:08:40.962755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.170 [2024-07-26 02:08:40.975724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.170 [2024-07-26 02:08:40.975758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.170 [2024-07-26 02:08:40.975777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.170 [2024-07-26 02:08:40.993051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.170 [2024-07-26 02:08:40.993095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.170 [2024-07-26 02:08:40.993114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.170 [2024-07-26 02:08:41.008003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.170 [2024-07-26 02:08:41.008034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.170 [2024-07-26 02:08:41.008051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.170 [2024-07-26 02:08:41.020958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.170 [2024-07-26 02:08:41.020993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.170 [2024-07-26 02:08:41.021012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.170 [2024-07-26 02:08:41.036791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.170 [2024-07-26 02:08:41.036825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.170 [2024-07-26 02:08:41.036844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.170 [2024-07-26 02:08:41.049984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.171 [2024-07-26 02:08:41.050018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.171 [2024-07-26 02:08:41.050037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.171 [2024-07-26 02:08:41.062385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.171 [2024-07-26 02:08:41.062437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.171 [2024-07-26 02:08:41.062456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.171 [2024-07-26 02:08:41.077308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.171 [2024-07-26 02:08:41.077338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.171 [2024-07-26 02:08:41.077354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.171 [2024-07-26 02:08:41.089504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.171 [2024-07-26 02:08:41.089540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.171 [2024-07-26 02:08:41.089558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.171 [2024-07-26 02:08:41.105793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.171 [2024-07-26 02:08:41.105829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.171 [2024-07-26 02:08:41.105858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.171 [2024-07-26 02:08:41.123423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.171 [2024-07-26 02:08:41.123452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.171 [2024-07-26 02:08:41.123468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.171 [2024-07-26 02:08:41.135377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.171 [2024-07-26 02:08:41.135434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.171 [2024-07-26 02:08:41.135452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.171 [2024-07-26 02:08:41.151277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.171 [2024-07-26 02:08:41.151308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.171 [2024-07-26 02:08:41.151340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.171 [2024-07-26 02:08:41.163196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.171 [2024-07-26 02:08:41.163225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.171 [2024-07-26 02:08:41.163255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.171 [2024-07-26 02:08:41.180021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.171 [2024-07-26 02:08:41.180067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.171 [2024-07-26 02:08:41.180088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.197796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.197833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.197852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.213938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.213974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.213993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.227236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.227267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.227285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.240975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.241052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.241084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.255016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.255053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.255081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.268288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.268319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.268339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.282484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.282520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.282539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.294589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.294619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.294650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.306663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.306692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.306723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.319594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.319624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.319656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.331924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.331973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.331992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.345416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.345446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.345478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.357000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.357030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.357072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.371859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.371895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.371913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.383446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.383477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.383508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.397236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.397283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.397300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.409410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.409438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.409468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.424099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.424130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.424162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.431 [2024-07-26 02:08:41.438477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.431 [2024-07-26 02:08:41.438506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.431 [2024-07-26 02:08:41.438538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.450189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.450219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.450251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.463684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.463735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.463768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.478289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.478320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.478337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.488835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.488862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.488892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.503225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.503255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.503287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.517815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.517846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.517863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.528954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.528985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.529017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.543128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.543161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.543194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.554281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.554309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.554341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.569178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.569207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.569240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.583073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.583103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.583136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.596647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.596676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.596708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.610864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.610892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.610922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.623762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.623794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.623811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.634216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.634246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.634278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.647726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.647754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.647785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.660765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.660796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.660813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.671943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.692 [2024-07-26 02:08:41.671972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.692 [2024-07-26 02:08:41.672004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.692 [2024-07-26 02:08:41.688405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.693 [2024-07-26 02:08:41.688456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.693 [2024-07-26 02:08:41.688486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.953 [2024-07-26 02:08:41.702952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.953 [2024-07-26 02:08:41.703005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.953 [2024-07-26 02:08:41.703024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.953 [2024-07-26 02:08:41.714287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.953 [2024-07-26 02:08:41.714315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.953 [2024-07-26 02:08:41.714345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.953 [2024-07-26 02:08:41.728535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.953 [2024-07-26 02:08:41.728565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.953 [2024-07-26 02:08:41.728603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.953 [2024-07-26 02:08:41.740281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.953 [2024-07-26 02:08:41.740309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.953 [2024-07-26 02:08:41.740340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.953 [2024-07-26 02:08:41.754325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.953 [2024-07-26 02:08:41.754352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.953 [2024-07-26 02:08:41.754383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.953 [2024-07-26 02:08:41.766283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.953 [2024-07-26 02:08:41.766313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.953 [2024-07-26 02:08:41.766345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.953 [2024-07-26 02:08:41.780719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.954 [2024-07-26 02:08:41.780749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.954 [2024-07-26 02:08:41.780781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.954 [2024-07-26 02:08:41.790983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.954 [2024-07-26 02:08:41.791012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.954 [2024-07-26 02:08:41.791045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.954 [2024-07-26 02:08:41.803698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.954 [2024-07-26 02:08:41.803735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.954 [2024-07-26 02:08:41.803767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.954 [2024-07-26 02:08:41.816767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.954 [2024-07-26 02:08:41.816796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.954 [2024-07-26 02:08:41.816828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.954 [2024-07-26 02:08:41.829323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.954 [2024-07-26 02:08:41.829369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.954 [2024-07-26 02:08:41.829385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.954 [2024-07-26 02:08:41.841703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.954 [2024-07-26 02:08:41.841731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.954 [2024-07-26 02:08:41.841762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.954 [2024-07-26 02:08:41.852996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.954 [2024-07-26 02:08:41.853024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.954 [2024-07-26 02:08:41.853055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.954 [2024-07-26 02:08:41.867925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.954 [2024-07-26 02:08:41.867953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.954 [2024-07-26 02:08:41.867984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.954 [2024-07-26 02:08:41.879713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.954 [2024-07-26 02:08:41.879743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:59.954 [2024-07-26 02:08:41.879773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:59.954 [2024-07-26 02:08:41.891670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720)
00:32:59.954 [2024-07-26 02:08:41.891699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26
nsid:1 lba:24940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.954 [2024-07-26 02:08:41.891730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.954 [2024-07-26 02:08:41.905265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:59.954 [2024-07-26 02:08:41.905295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.954 [2024-07-26 02:08:41.905313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.954 [2024-07-26 02:08:41.918462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:59.954 [2024-07-26 02:08:41.918493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.954 [2024-07-26 02:08:41.918509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.954 [2024-07-26 02:08:41.929665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:59.954 [2024-07-26 02:08:41.929694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.954 [2024-07-26 02:08:41.929725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.954 [2024-07-26 02:08:41.943073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:59.954 [2024-07-26 02:08:41.943102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.954 [2024-07-26 02:08:41.943134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.954 [2024-07-26 02:08:41.955120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:32:59.954 [2024-07-26 02:08:41.955148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.954 [2024-07-26 02:08:41.955178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.213 [2024-07-26 02:08:41.966724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:33:00.213 [2024-07-26 02:08:41.966770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.213 [2024-07-26 02:08:41.966786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.213 [2024-07-26 02:08:41.982706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:33:00.213 [2024-07-26 02:08:41.982736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.213 [2024-07-26 02:08:41.982769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.213 [2024-07-26 02:08:41.997604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 
00:33:00.213 [2024-07-26 02:08:41.997647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.213 [2024-07-26 02:08:41.997664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.213 [2024-07-26 02:08:42.009056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:33:00.213 [2024-07-26 02:08:42.009093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.213 [2024-07-26 02:08:42.009111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.213 [2024-07-26 02:08:42.024070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:33:00.213 [2024-07-26 02:08:42.024100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.213 [2024-07-26 02:08:42.024127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.213 [2024-07-26 02:08:42.037807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:33:00.213 [2024-07-26 02:08:42.037836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.213 [2024-07-26 02:08:42.037866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.213 [2024-07-26 02:08:42.050853] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:33:00.213 [2024-07-26 02:08:42.050883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.213 [2024-07-26 02:08:42.050914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.213 [2024-07-26 02:08:42.063268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9cb720) 00:33:00.213 [2024-07-26 02:08:42.063300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.213 [2024-07-26 02:08:42.063318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:00.213 00:33:00.213 Latency(us) 00:33:00.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.213 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:00.213 nvme0n1 : 2.00 18511.86 72.31 0.00 0.00 6905.62 3713.71 25437.68 00:33:00.213 =================================================================================================================== 00:33:00.213 Total : 18511.86 72.31 0.00 0.00 6905.62 3713.71 25437.68 00:33:00.213 0 00:33:00.213 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:00.213 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:00.213 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:00.213 | .driver_specific 00:33:00.213 | .nvme_error 00:33:00.213 | .status_code 00:33:00.213 | 
.command_transient_transport_error' 00:33:00.213 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2413291 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2413291 ']' 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2413291 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2413291 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2413291' 00:33:00.472 killing process with pid 2413291 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2413291 00:33:00.472 Received shutdown signal, test time was about 2.000000 seconds 00:33:00.472 00:33:00.472 Latency(us) 00:33:00.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.472 
=================================================================================================================== 00:33:00.472 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:00.472 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2413291 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2413702 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2413702 /var/tmp/bperf.sock 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2413702 ']' 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:00.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:00.731 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:00.731 [2024-07-26 02:08:42.644980] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:33:00.731 [2024-07-26 02:08:42.645055] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413702 ] 00:33:00.731 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:00.731 Zero copy mechanism will not be used. 00:33:00.731 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.731 [2024-07-26 02:08:42.707455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.989 [2024-07-26 02:08:42.795383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.989 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:00.989 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:00.989 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:00.989 02:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:01.247 02:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:01.247 02:08:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.247 02:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:01.247 02:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.247 02:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:01.247 02:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:01.822 nvme0n1 00:33:01.822 02:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:01.822 02:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.822 02:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:01.822 02:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.822 02:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:01.822 02:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:01.822 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:01.822 Zero copy mechanism will not be used. 00:33:01.822 Running I/O for 2 seconds... 
00:33:01.822 [2024-07-26 02:08:43.798981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:01.822 [2024-07-26 02:08:43.799067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.822 [2024-07-26 02:08:43.799091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:01.822 [2024-07-26 02:08:43.806620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:01.822 [2024-07-26 02:08:43.806668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.822 [2024-07-26 02:08:43.806686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:01.822 [2024-07-26 02:08:43.814174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:01.822 [2024-07-26 02:08:43.814206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.822 [2024-07-26 02:08:43.814224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:01.822 [2024-07-26 02:08:43.821974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:01.822 [2024-07-26 02:08:43.822020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.822 [2024-07-26 02:08:43.822037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:01.822 [2024-07-26 02:08:43.829369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:01.822 [2024-07-26 02:08:43.829398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:01.822 [2024-07-26 02:08:43.829430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.082 [2024-07-26 02:08:43.836823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.082 [2024-07-26 02:08:43.836855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.082 [2024-07-26 02:08:43.836872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.082 [2024-07-26 02:08:43.844191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.082 [2024-07-26 02:08:43.844236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.082 [2024-07-26 02:08:43.844253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.082 [2024-07-26 02:08:43.851447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.082 [2024-07-26 02:08:43.851491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.082 [2024-07-26 02:08:43.851507] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.082 [2024-07-26 02:08:43.858735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.082 [2024-07-26 02:08:43.858766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.082 [2024-07-26 02:08:43.858782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.082 [2024-07-26 02:08:43.866036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.082 [2024-07-26 02:08:43.866073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.082 [2024-07-26 02:08:43.866091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.082 [2024-07-26 02:08:43.873387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.082 [2024-07-26 02:08:43.873438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.082 [2024-07-26 02:08:43.873454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.082 [2024-07-26 02:08:43.880764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.880794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.880810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.888124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.888152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.888169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.895342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.895386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.895410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.902753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.902797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.902813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.910385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.910414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.910447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.918091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.918121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.918137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.925401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.925430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.925463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.932791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.932835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.932851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.940241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 
02:08:43.940270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.940286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.947520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.947549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.947566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.954781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.954825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.954840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.961949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.961986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.962003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.969244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.969275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.969291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.976400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.976429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.976462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.983705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.983749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.983764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.991275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.991318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.991335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:43.998648] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:43.998693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:43.998709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.005979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.006009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.006040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.013220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.013251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.013267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.020391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.020420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.020460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.027629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.027660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.027676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.034685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.034715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.034746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.042005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.042034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.042075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.049152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.049182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.049199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.056274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.056306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.056323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.063520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.063550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.063567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.070709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.070739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.070755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.077860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.077905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 
02:08:44.077920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.085127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.085177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.085208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.083 [2024-07-26 02:08:44.092267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.083 [2024-07-26 02:08:44.092297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.083 [2024-07-26 02:08:44.092313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.099522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.099553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.099569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.106647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.106690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.106707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.114081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.114112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.114134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.121411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.121442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.121474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.128611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.128655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.128672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.135887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.135917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.135934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.143122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.143152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.143169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.150211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.150242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.150258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.157702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.157733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.157765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.165094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.165124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.165140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.172563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.172609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.172625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.179852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.179882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.179899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.187266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.187297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.187314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.194663] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.194692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.194723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.202021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.202050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.202090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.209123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.209152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.209176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.216440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.216484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.216501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.223776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.223820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.223837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.231081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.231111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.231127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.238558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.238588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.238620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.245862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.245892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.245909] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.253081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.253110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.253127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.260157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.260186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.260203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.267420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.267451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 02:08:44.267467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.274899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.342 [2024-07-26 02:08:44.274935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-26 
02:08:44.274968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.342 [2024-07-26 02:08:44.282297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.343 [2024-07-26 02:08:44.282327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-26 02:08:44.282344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.343 [2024-07-26 02:08:44.289389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.343 [2024-07-26 02:08:44.289432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-26 02:08:44.289447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.343 [2024-07-26 02:08:44.296631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.343 [2024-07-26 02:08:44.296675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-26 02:08:44.296690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.343 [2024-07-26 02:08:44.303905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.343 [2024-07-26 02:08:44.303949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-26 02:08:44.303965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.343 [2024-07-26 02:08:44.311161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.343 [2024-07-26 02:08:44.311193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-26 02:08:44.311211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.343 [2024-07-26 02:08:44.318500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.343 [2024-07-26 02:08:44.318530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-26 02:08:44.318563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.343 [2024-07-26 02:08:44.325740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.343 [2024-07-26 02:08:44.325770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-26 02:08:44.325803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.343 [2024-07-26 02:08:44.332923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.343 [2024-07-26 02:08:44.332953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-26 02:08:44.332992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.343 [2024-07-26 02:08:44.340248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.343 [2024-07-26 02:08:44.340279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-26 02:08:44.340295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.343 [2024-07-26 02:08:44.347442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.343 [2024-07-26 02:08:44.347470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-26 02:08:44.347502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.602 [2024-07-26 02:08:44.354578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.602 [2024-07-26 02:08:44.354623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-26 02:08:44.354638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.602 [2024-07-26 02:08:44.361874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bc5130) 00:33:02.602 [2024-07-26 02:08:44.361904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-26 02:08:44.361921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.602 [2024-07-26 02:08:44.368999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.602 [2024-07-26 02:08:44.369028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-26 02:08:44.369045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.602 [2024-07-26 02:08:44.376093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.602 [2024-07-26 02:08:44.376133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-26 02:08:44.376149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.602 [2024-07-26 02:08:44.383330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.602 [2024-07-26 02:08:44.383359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-26 02:08:44.383377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.602 [2024-07-26 02:08:44.390554] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.602 [2024-07-26 02:08:44.390583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-26 02:08:44.390616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.602 [2024-07-26 02:08:44.398190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.602 [2024-07-26 02:08:44.398233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-26 02:08:44.398251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.602 [2024-07-26 02:08:44.407495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.602 [2024-07-26 02:08:44.407541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-26 02:08:44.407559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.602 [2024-07-26 02:08:44.417110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.602 [2024-07-26 02:08:44.417142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-26 02:08:44.417159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:33:02.602 [2024-07-26 02:08:44.426535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.602 [2024-07-26 02:08:44.426567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-26 02:08:44.426585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.602 [2024-07-26 02:08:44.435927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.602 [2024-07-26 02:08:44.435972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.435989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.445514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.445545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.445578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.454915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.454946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.454979] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.464211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.464244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.464262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.473320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.473367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.473383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.482507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.482553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.482571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.491997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.492043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 
02:08:44.492070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.501528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.501560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.501577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.510756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.510789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.510805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.519610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.519654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.519672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.528632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.528663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.528680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.537291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.537325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.537343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.544211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.544241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.544272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.551532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.551563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.551592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.558628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.558673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.558690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.565785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.565831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.565849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.573018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.573047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.573073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.580267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.580297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.580313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.587550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.587586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.587618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.594781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.594811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.594827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.601908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.601938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.601954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.603 [2024-07-26 02:08:44.609281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.603 [2024-07-26 02:08:44.609309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-26 02:08:44.609348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.616544] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.616573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.616605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.623791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.623821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.623837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.630794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.630824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.630840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.638090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.638120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.638137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.645098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.645127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.645144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.652145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.652176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.652193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.659777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.659811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.659829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.667513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.667546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.667564] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.675388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.675421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.675448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.683503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.683536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.683554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.691317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.691363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.691381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.699249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.699277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 
02:08:44.699308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.707125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.707171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.707190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.716280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.716311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.716328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.724996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.725031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.725050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.734683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.734718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.734737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.743873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.743908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.743926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.753903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.753944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.753965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.762532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.762567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.762585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.772295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.772344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.772361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.782536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.782570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.782589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.792827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.792862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.792881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.802801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.802835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.802854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.812995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 
00:33:02.863 [2024-07-26 02:08:44.813036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.813056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.822147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.822179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.822197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.831642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.831676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.831695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.840680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.840715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.840734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.849904] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.863 [2024-07-26 02:08:44.849939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-26 02:08:44.849957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.863 [2024-07-26 02:08:44.858608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.864 [2024-07-26 02:08:44.858642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.864 [2024-07-26 02:08:44.858662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.864 [2024-07-26 02:08:44.867044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:02.864 [2024-07-26 02:08:44.867100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.864 [2024-07-26 02:08:44.867118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.875546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.875581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.875600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.884298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.884328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.884345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.893029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.893073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.893095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.901628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.901662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.901681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.910105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.910136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.910160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.918696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.918730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.918748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.927580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.927614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.927633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.936218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.936263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.936279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.944628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.944662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.944680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.952643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.952678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.952702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.961079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.961126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.961143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.969202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.969232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.969249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.977026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.977066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:03.124 [2024-07-26 02:08:44.977086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.984868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.984905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.984925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:44.992676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:44.992708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:44.992726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:45.000691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:45.000724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.124 [2024-07-26 02:08:45.000743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.124 [2024-07-26 02:08:45.008760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.124 [2024-07-26 02:08:45.008793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.008811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.016852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.016890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.016909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.025610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.025645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.025663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.033981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.034014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.034033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.042812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.042846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.042866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.051209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.051240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.051257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.059659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.059693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.059712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.069340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.069384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.069400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.078004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.078040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.078065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.086728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.086762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.086781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.095891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.095925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.095944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.105292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.105324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.105341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.114149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.114179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.114196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.122846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.122884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.122903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.124 [2024-07-26 02:08:45.130804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.124 [2024-07-26 02:08:45.130838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.124 [2024-07-26 02:08:45.130862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.138631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.138666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.138684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.146656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.146688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.146706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.154623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.154655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.154673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.162471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.162503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.162521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.170422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.170454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.170471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.178284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.178313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.178330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.186241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.186270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.186286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.194234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.194263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.194279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.202067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.202100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.202133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.209985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.210017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.210035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.217665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.217698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.217715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.225428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.225460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.225478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.233244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.233272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.233288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.241384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.241417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.241435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.249352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.249398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.249417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.257188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.257216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.257233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.265049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.265088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.265126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.273121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.273150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.273167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.281350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.281397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.384 [2024-07-26 02:08:45.281416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.384 [2024-07-26 02:08:45.289258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.384 [2024-07-26 02:08:45.289288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.289304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.297176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.297205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.297222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.305067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.305114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.305130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.313140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.313171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.313188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.321073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.321118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.321135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.328972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.329006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.329025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.336837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.336876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.336895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.344733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.344765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.344783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.352510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.352543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.352561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.360336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.360383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.360401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.368254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.368283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.368298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.376313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.376342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.376358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.384128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.384157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.384174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.385 [2024-07-26 02:08:45.392029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.385 [2024-07-26 02:08:45.392068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.385 [2024-07-26 02:08:45.392088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.643 [2024-07-26 02:08:45.400014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.643 [2024-07-26 02:08:45.400048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.643 [2024-07-26 02:08:45.400074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.643 [2024-07-26 02:08:45.407917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.643 [2024-07-26 02:08:45.407949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.643 [2024-07-26 02:08:45.407968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.643 [2024-07-26 02:08:45.415814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.643 [2024-07-26 02:08:45.415845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.643 [2024-07-26 02:08:45.415863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.423678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.423710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.423728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.431632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.431665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.431682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.439702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.439735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.439754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.447601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.447634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.447651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.455553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.455586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.455604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.463615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.463647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.463664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.471460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.471493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.471517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.479258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.479288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.479304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.487191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.487221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.487237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.495344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.495390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.495409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.502923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.502952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.502984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.510384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.510428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.510447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.517839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.517872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.517891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.525313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.525343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.525359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.533136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.533166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.533182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.541046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.541118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.541136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.548936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.548968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.548986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.557043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.557082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.557101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.564997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.565029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.565048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.572989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.573022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.573041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.580797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.580832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.580851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.588618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.588651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.588670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.596433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.596466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.596484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.604227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.604257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.604282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.612089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.644 [2024-07-26 02:08:45.612144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.644 [2024-07-26 02:08:45.612161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.644 [2024-07-26 02:08:45.620006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.645 [2024-07-26 02:08:45.620040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.645 [2024-07-26 02:08:45.620065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.645 [2024-07-26 02:08:45.628019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.645 [2024-07-26 02:08:45.628053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.645 [2024-07-26 02:08:45.628081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001
p:0 m:0 dnr:0 00:33:03.645 [2024-07-26 02:08:45.636628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.645 [2024-07-26 02:08:45.636662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.645 [2024-07-26 02:08:45.636681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.645 [2024-07-26 02:08:45.644658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.645 [2024-07-26 02:08:45.644691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.645 [2024-07-26 02:08:45.644709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.645 [2024-07-26 02:08:45.652660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.645 [2024-07-26 02:08:45.652693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.645 [2024-07-26 02:08:45.652711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.660420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.660451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.660469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.668296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.668325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.668342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.676098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.676134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.676152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.683752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.683783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.683800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.691175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.691206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.691223] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.698491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.698521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.698538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.706202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.706233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.706250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.713781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.713812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.713829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.721271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.721312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.721329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.728837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.728868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.728884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.736158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.736188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.736204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.743420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.743451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.743468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.750815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.750846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.750863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.757991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.902 [2024-07-26 02:08:45.758021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.902 [2024-07-26 02:08:45.758038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.902 [2024-07-26 02:08:45.765363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.903 [2024-07-26 02:08:45.765407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.903 [2024-07-26 02:08:45.765423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.903 [2024-07-26 02:08:45.772544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.903 [2024-07-26 02:08:45.772574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.903 [2024-07-26 02:08:45.772590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.903 [2024-07-26 02:08:45.779745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130) 00:33:03.903 [2024-07-26 02:08:45.779774] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.903 [2024-07-26 02:08:45.779791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.903 [2024-07-26 02:08:45.787056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.903 [2024-07-26 02:08:45.787093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.903 [2024-07-26 02:08:45.787109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.903 [2024-07-26 02:08:45.794644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bc5130)
00:33:03.903 [2024-07-26 02:08:45.794674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.903 [2024-07-26 02:08:45.794691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.903
00:33:03.903 Latency(us)
00:33:03.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:03.903 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:03.903 nvme0n1 : 2.00 3941.41 492.68 0.00 0.00 4053.75 752.45 10582.85
00:33:03.903 ===================================================================================================================
00:33:03.903 Total : 3941.41 492.68 0.00 0.00 4053.75 752.45 10582.85
00:33:03.903 0
00:33:03.903 02:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:03.903 
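The get_transient_errcount step above reads the transient transport error counter out of `bdev_get_iostat` output fetched over the bperf RPC socket, using the jq path `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error`. A minimal Python sketch of the same extraction; the JSON document below is a hand-written illustrative sample shaped after that jq path, not captured output:

```python
import json

# Illustrative bdev_get_iostat-style output. The nested key path mirrors the
# jq filter the test pipes the RPC output through; the numbers are made up.
sample_iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 254
          }
        }
      }
    }
  ]
}
""")

errcount = (
    sample_iostat["bdevs"][0]
    ["driver_specific"]["nvme_error"]["status_code"]
    ["command_transient_transport_error"]
)
# The test only passes when this count is positive, i.e. the injected
# digest corruption actually surfaced as transient transport errors.
assert errcount > 0
```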
02:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:03.903 02:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:03.903 | .driver_specific
00:33:03.903 | .nvme_error
00:33:03.903 | .status_code
00:33:03.903 | .command_transient_transport_error'
00:33:03.903 02:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 254 > 0 ))
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2413702
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2413702 ']'
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2413702
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2413702
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2413702'
00:33:04.162 killing process with pid 2413702
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@969 -- # kill 2413702
00:33:04.162 Received shutdown signal, test time was about 2.000000 seconds
00:33:04.162
00:33:04.162 Latency(us)
00:33:04.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:04.162 ===================================================================================================================
00:33:04.162 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:04.162 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2413702
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2414107
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2414107 /var/tmp/bperf.sock
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2414107 ']'
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local
max_retries=100
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:04.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:04.420 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:04.420 [2024-07-26 02:08:46.360249] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization...
00:33:04.420 [2024-07-26 02:08:46.360326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2414107 ]
00:33:04.420 EAL: No free 2048 kB hugepages reported on node 1
00:33:04.420 [2024-07-26 02:08:46.418070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:04.677 [2024-07-26 02:08:46.505114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:04.677 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:04.677 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:33:04.677 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:04.677 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:04.935 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 --
# rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:04.935 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:04.935 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:04.935 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:04.935 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:04.935 02:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:05.503 nvme0n1
00:33:05.503 02:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:05.503 02:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:05.503 02:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:05.503 02:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:05.503 02:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:05.503 02:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:05.503 Running I/O for 2 seconds... 
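The records that follow come from a run where `accel_error_inject_error -o crc32c -t corrupt -i 256` deliberately corrupts CRC32C results, so received data digests fail verification and each affected command completes with TRANSIENT TRANSPORT ERROR (00/22). The NVMe/TCP data digest (DDGST) is a CRC-32C over the PDU data. As a self-contained illustration of the check being failed here, below is a pure-Python, table-driven CRC-32C and a digest comparison; this is an illustrative stand-in, not SPDK's accelerated implementation:

```python
def _crc32c_table():
    # Reversed CRC-32C (Castagnoli) polynomial, 0x82F63B78.
    poly = 0x82F63B78
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _crc32c_table()

def crc32c(data: bytes) -> int:
    # Reflected CRC, init and final XOR of 0xFFFFFFFF.
    crc = 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ _TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF

def digest_ok(payload: bytes, received_digest: int) -> bool:
    # What the receive path effectively does: recompute and compare.
    return crc32c(payload) == received_digest

payload = b"123456789"
good = crc32c(payload)      # standard CRC-32C check value: 0xE3069283
corrupted = good ^ 0x1      # stand-in for the injected corruption
assert digest_ok(payload, good)
assert not digest_ok(payload, corrupted)
```

When the recomputed digest disagrees with the one carried in the PDU, the transport reports a data digest error, which is exactly the `nvme_tcp.c ... data digest error on tqpair` pattern throughout this log.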
00:33:05.503 [2024-07-26 02:08:47.467642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190edd58 00:33:05.503 [2024-07-26 02:08:47.468765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.503 [2024-07-26 02:08:47.468810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.503 [2024-07-26 02:08:47.479901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fa3a0 00:33:05.503 [2024-07-26 02:08:47.480989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.503 [2024-07-26 02:08:47.481024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.503 [2024-07-26 02:08:47.493240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e4140 00:33:05.503 [2024-07-26 02:08:47.494490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.503 [2024-07-26 02:08:47.494524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.503 [2024-07-26 02:08:47.506524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e5658 00:33:05.503 [2024-07-26 02:08:47.507917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.503 [2024-07-26 02:08:47.507960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.762 [2024-07-26 02:08:47.519970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e7c50 00:33:05.762 [2024-07-26 02:08:47.521562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.762 [2024-07-26 02:08:47.521609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.762 [2024-07-26 02:08:47.533395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fa3a0 00:33:05.762 [2024-07-26 02:08:47.535178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.762 [2024-07-26 02:08:47.535224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.762 [2024-07-26 02:08:47.546623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f0bc0 00:33:05.762 [2024-07-26 02:08:47.548525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.762 [2024-07-26 02:08:47.548573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:05.762 [2024-07-26 02:08:47.559954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ed4e8 00:33:05.762 [2024-07-26 02:08:47.562084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.762 [2024-07-26 02:08:47.562131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:05.762 [2024-07-26 02:08:47.568988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fc560 00:33:05.762 [2024-07-26 02:08:47.569913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.762 [2024-07-26 02:08:47.569941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.762 [2024-07-26 02:08:47.581002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ec840 00:33:05.762 [2024-07-26 02:08:47.581898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.762 [2024-07-26 02:08:47.581937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.762 [2024-07-26 02:08:47.594321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f96f8 00:33:05.762 [2024-07-26 02:08:47.595385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.762 [2024-07-26 02:08:47.595427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.762 [2024-07-26 02:08:47.607726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ec408 00:33:05.762 [2024-07-26 02:08:47.608995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.762 [2024-07-26 02:08:47.609039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.762 [2024-07-26 02:08:47.621021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e49b0 00:33:05.762 [2024-07-26 02:08:47.622450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.762 [2024-07-26 02:08:47.622477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.762 [2024-07-26 02:08:47.634291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e0ea0 00:33:05.763 [2024-07-26 02:08:47.635892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.763 [2024-07-26 02:08:47.635935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.763 [2024-07-26 02:08:47.646206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f96f8 00:33:05.763 [2024-07-26 02:08:47.647301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.763 [2024-07-26 02:08:47.647330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.763 [2024-07-26 02:08:47.659021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e4140 00:33:05.763 [2024-07-26 02:08:47.659943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.763 
[2024-07-26 02:08:47.659972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:05.763 [2024-07-26 02:08:47.672265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e12d8 00:33:05.763 [2024-07-26 02:08:47.673397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.763 [2024-07-26 02:08:47.673426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:05.763 [2024-07-26 02:08:47.684215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e1f80 00:33:05.763 [2024-07-26 02:08:47.686054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.763 [2024-07-26 02:08:47.686094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.763 [2024-07-26 02:08:47.694987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f0788 00:33:05.763 [2024-07-26 02:08:47.695902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.763 [2024-07-26 02:08:47.695929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.763 [2024-07-26 02:08:47.708501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fa3a0 00:33:05.763 [2024-07-26 02:08:47.709567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10740 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.763 [2024-07-26 02:08:47.709612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.763 [2024-07-26 02:08:47.721755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e4140 00:33:05.763 [2024-07-26 02:08:47.722985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.763 [2024-07-26 02:08:47.723028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.763 [2024-07-26 02:08:47.735048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e5658 00:33:05.763 [2024-07-26 02:08:47.736494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.763 [2024-07-26 02:08:47.736523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.763 [2024-07-26 02:08:47.748355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e38d0 00:33:05.763 [2024-07-26 02:08:47.749948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.763 [2024-07-26 02:08:47.749991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.763 [2024-07-26 02:08:47.761647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fa3a0 00:33:05.763 [2024-07-26 02:08:47.763378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:18049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.763 [2024-07-26 02:08:47.763420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.773508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f5378 00:33:06.021 [2024-07-26 02:08:47.774772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.774799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.786297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e9168 00:33:06.021 [2024-07-26 02:08:47.787391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.787421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.798235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f7da8 00:33:06.021 [2024-07-26 02:08:47.800163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.800192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.808948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f35f0 00:33:06.021 [2024-07-26 02:08:47.809753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.809796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.822230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f9f68 00:33:06.021 [2024-07-26 02:08:47.823366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.823394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.834625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f2948 00:33:06.021 [2024-07-26 02:08:47.835870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.835913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.847984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e5220 00:33:06.021 [2024-07-26 02:08:47.849505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.849548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.861368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ecc78 00:33:06.021 
[2024-07-26 02:08:47.862894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.862936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.874693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f9f68 00:33:06.021 [2024-07-26 02:08:47.876498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.876532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.888445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f1868 00:33:06.021 [2024-07-26 02:08:47.890498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.890534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.902090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ef270 00:33:06.021 [2024-07-26 02:08:47.904271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.904301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.911192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc0fcc0) with pdu=0x2000190e6fa8 00:33:06.021 [2024-07-26 02:08:47.912146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.912196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.924740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190edd58 00:33:06.021 [2024-07-26 02:08:47.925845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.925879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.938126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e3060 00:33:06.021 [2024-07-26 02:08:47.939406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.939450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.951488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ee5c8 00:33:06.021 [2024-07-26 02:08:47.952940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.952974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.964883] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e6b70 00:33:06.021 [2024-07-26 02:08:47.966539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.966581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.977018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190efae0 00:33:06.021 [2024-07-26 02:08:47.978644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.978682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:06.021 [2024-07-26 02:08:47.990380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fa3a0 00:33:06.021 [2024-07-26 02:08:47.992177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.021 [2024-07-26 02:08:47.992221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:06.022 [2024-07-26 02:08:48.003697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e1f80 00:33:06.022 [2024-07-26 02:08:48.005644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.022 [2024-07-26 02:08:48.005677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
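Every `data_crc32_calc_done` error in this stretch of the run is the receive path rejecting an NVMe/TCP data PDU whose CRC-32C data digest does not match the received payload, which is why each one is paired with a WRITE command completing as a transient transport error. A minimal pure-Python sketch of that digest check follows; it is an illustration of CRC-32C (Castagnoli, reflected, init/xorout 0xFFFFFFFF) only, not SPDK's actual `tcp.c` implementation, which uses accelerated CRC routines:

```python
def crc32c(data: bytes) -> int:
    # Bitwise CRC-32C (Castagnoli): reflected form of poly 0x1EDC6F41 is 0x82F63B78,
    # initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF -- the digest NVMe/TCP uses.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# One 0x1000-byte data block, matching the len:0x1000 SGL entries in the log.
payload = bytes(4096)
sent_digest = crc32c(payload)

# A single bit flipped in flight is enough for the receiver to reject the PDU,
# which is what the test above is deliberately provoking.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert crc32c(corrupted) != sent_digest
```

The standard check value for this variant is `crc32c(b"123456789") == 0xE3069283`, which is a quick way to confirm a CRC-32C implementation against the one the transport expects.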
00:33:06.022 [2024-07-26 02:08:48.016906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f7100 00:33:06.022 [2024-07-26 02:08:48.018996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.022 [2024-07-26 02:08:48.019028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:06.022 [2024-07-26 02:08:48.025941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fc560 00:33:06.022 [2024-07-26 02:08:48.026858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.022 [2024-07-26 02:08:48.026905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.039381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190eea00 00:33:06.281 [2024-07-26 02:08:48.040489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.040531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.053759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ed920 00:33:06.281 [2024-07-26 02:08:48.055524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.055551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.067034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e9e10 00:33:06.281 [2024-07-26 02:08:48.068952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.068985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.080391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e3498 00:33:06.281 [2024-07-26 02:08:48.082455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.082498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.089395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f1ca0 00:33:06.281 [2024-07-26 02:08:48.090297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.090339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.101484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e38d0 00:33:06.281 [2024-07-26 02:08:48.102363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.102405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.114718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190eee38 00:33:06.281 [2024-07-26 02:08:48.115773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.115805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.128091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f96f8 00:33:06.281 [2024-07-26 02:08:48.129331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.129372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.141341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fa3a0 00:33:06.281 [2024-07-26 02:08:48.142755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.142782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.154741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e88f8 00:33:06.281 [2024-07-26 02:08:48.156329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.156357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.168041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190eee38 00:33:06.281 [2024-07-26 02:08:48.169799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.169840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.181267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e9168 00:33:06.281 [2024-07-26 02:08:48.183236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.183279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.194530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ed0b0 00:33:06.281 [2024-07-26 02:08:48.196613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.196645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.203562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190de038 00:33:06.281 [2024-07-26 02:08:48.204460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 
[2024-07-26 02:08:48.204503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.216851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fd208 00:33:06.281 [2024-07-26 02:08:48.217949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.217981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.228834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ee190 00:33:06.281 [2024-07-26 02:08:48.229918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.229959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.242170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fa3a0 00:33:06.281 [2024-07-26 02:08:48.243432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.243464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.256320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190df550 00:33:06.281 [2024-07-26 02:08:48.257785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10768 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.257832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.268238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f96f8 00:33:06.281 [2024-07-26 02:08:48.269655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.269687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.281 [2024-07-26 02:08:48.281553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e0a68 00:33:06.281 [2024-07-26 02:08:48.283134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.281 [2024-07-26 02:08:48.283161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.294700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e5a90 00:33:06.573 [2024-07-26 02:08:48.296375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.296404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.307602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f7da8 00:33:06.573 [2024-07-26 02:08:48.309529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:23537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.309570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.320944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fd208 00:33:06.573 [2024-07-26 02:08:48.323068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.323110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.329949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e7c50 00:33:06.573 [2024-07-26 02:08:48.330847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.330879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.343182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f0ff8 00:33:06.573 [2024-07-26 02:08:48.344264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.344307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.355207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190edd58 00:33:06.573 [2024-07-26 02:08:48.356290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.356339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.368479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f96f8 00:33:06.573 [2024-07-26 02:08:48.369707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.369734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.381751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fa3a0 00:33:06.573 [2024-07-26 02:08:48.383235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.383278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.394977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ea680 00:33:06.573 [2024-07-26 02:08:48.396548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.396579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.408262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190edd58 00:33:06.573 
[2024-07-26 02:08:48.410026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.410067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.421497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ea248 00:33:06.573 [2024-07-26 02:08:48.423395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.423422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.433308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fe720 00:33:06.573 [2024-07-26 02:08:48.434731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.434759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.444899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e73e0 00:33:06.573 [2024-07-26 02:08:48.446787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.446819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.455802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc0fcc0) with pdu=0x2000190ff3c8 00:33:06.573 [2024-07-26 02:08:48.456710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.456737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.469142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e5a90 00:33:06.573 [2024-07-26 02:08:48.470256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.470299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.482534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f4298 00:33:06.573 [2024-07-26 02:08:48.483775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.483804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.495926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fa7d8 00:33:06.573 [2024-07-26 02:08:48.497369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.497398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.507857] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e1b48 00:33:06.573 [2024-07-26 02:08:48.508765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.508809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.520715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fda78 00:33:06.573 [2024-07-26 02:08:48.521483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.521512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.573 [2024-07-26 02:08:48.533754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e99d8 00:33:06.573 [2024-07-26 02:08:48.534630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.573 [2024-07-26 02:08:48.534662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.574 [2024-07-26 02:08:48.547155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e1710 00:33:06.574 [2024-07-26 02:08:48.548211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.574 [2024-07-26 02:08:48.548245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
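Each rejected write in this log completes with `COMMAND TRANSIENT TRANSPORT ERROR (00/22)`: the pair printed by `spdk_nvme_print_completion` is status code type 0x0 (generic command status) and status code 0x22 (transient transport error), signalling a retryable transport-level failure rather than a media or command error. A small sketch of decoding that pair; the helper and its table are ours, covering only the codes relevant here, not a full NVMe status decoder:

```python
# Subset of NVMe generic command status codes (SCT 0x0); names match what
# spdk_nvme_print_completion prints for the "(00/22)" completions above.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x04: "DATA TRANSFER ERROR",
    0x22: "COMMAND TRANSIENT TRANSPORT ERROR",
}

def decode_status(sct: int, sc: int) -> str:
    # sct/sc are the two hex fields printed as "(SCT/SC)" in the log.
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"unknown generic status 0x{sc:02x}")
    return f"sct=0x{sct:x} sc=0x{sc:02x}"

print(decode_status(0x00, 0x22))  # the (00/22) seen on every failed WRITE here
```

Because the status is transient, initiators are expected to retry these writes, which is why the test keeps issuing new WRITE commands after each digest failure rather than tearing the queue pair down.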
00:33:06.574 [2024-07-26 02:08:48.559220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ddc00 00:33:06.574 [2024-07-26 02:08:48.561140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.574 [2024-07-26 02:08:48.561169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.570152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e8d30 00:33:06.832 [2024-07-26 02:08:48.571075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.571116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.583382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190edd58 00:33:06.832 [2024-07-26 02:08:48.584509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.584541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.596699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e99d8 00:33:06.832 [2024-07-26 02:08:48.597899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.597930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.610851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fb480 00:33:06.832 [2024-07-26 02:08:48.612307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.612355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.623966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ec408 00:33:06.832 [2024-07-26 02:08:48.625572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.625614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.636047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f0350 00:33:06.832 [2024-07-26 02:08:48.637616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.637648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.647870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ee5c8 00:33:06.832 [2024-07-26 02:08:48.648948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.648990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.660728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e23b8 00:33:06.832 [2024-07-26 02:08:48.661665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.661694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.673929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f0788 00:33:06.832 [2024-07-26 02:08:48.675021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.675050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.685922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f81e0 00:33:06.832 [2024-07-26 02:08:48.687795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.687832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.696837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f3a28 00:33:06.832 [2024-07-26 02:08:48.697743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.697784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:06.832 [2024-07-26 02:08:48.710181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e5a90 00:33:06.832 [2024-07-26 02:08:48.711226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.832 [2024-07-26 02:08:48.711270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.833 [2024-07-26 02:08:48.722976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f0bc0 00:33:06.833 [2024-07-26 02:08:48.724036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.833 [2024-07-26 02:08:48.724076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:06.833 [2024-07-26 02:08:48.736314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f20d8 00:33:06.833 [2024-07-26 02:08:48.737603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.833 [2024-07-26 02:08:48.737632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:06.833 [2024-07-26 02:08:48.749689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f1ca0 00:33:06.833 [2024-07-26 02:08:48.751110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.833 
[2024-07-26 02:08:48.751139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.833 [2024-07-26 02:08:48.762971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fe2e8 00:33:06.833 [2024-07-26 02:08:48.764596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.833 [2024-07-26 02:08:48.764625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.833 [2024-07-26 02:08:48.776306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f0bc0 00:33:06.833 [2024-07-26 02:08:48.778067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.833 [2024-07-26 02:08:48.778108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:06.833 [2024-07-26 02:08:48.789601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e5658 00:33:06.833 [2024-07-26 02:08:48.791502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.833 [2024-07-26 02:08:48.791534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:06.833 [2024-07-26 02:08:48.801454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e0a68 00:33:06.833 [2024-07-26 02:08:48.802876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23398 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:06.833 [2024-07-26 02:08:48.802918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.833 [2024-07-26 02:08:48.812961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fd208 00:33:06.833 [2024-07-26 02:08:48.814826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.833 [2024-07-26 02:08:48.814859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:06.833 [2024-07-26 02:08:48.823877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fb048 00:33:06.833 [2024-07-26 02:08:48.824762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.833 [2024-07-26 02:08:48.824792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:06.833 [2024-07-26 02:08:48.837219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190eee38 00:33:06.833 [2024-07-26 02:08:48.838282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.833 [2024-07-26 02:08:48.838323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:07.091 [2024-07-26 02:08:48.850577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e2c28 00:33:07.091 [2024-07-26 02:08:48.851811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:17197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.091 [2024-07-26 02:08:48.851855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:07.091 [2024-07-26 02:08:48.863941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e27f0 00:33:07.091 [2024-07-26 02:08:48.865335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.091 [2024-07-26 02:08:48.865377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:07.091 [2024-07-26 02:08:48.877171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f0350 00:33:07.091 [2024-07-26 02:08:48.878691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.091 [2024-07-26 02:08:48.878738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:07.091 [2024-07-26 02:08:48.890418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190eee38 00:33:07.091 [2024-07-26 02:08:48.892180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.091 [2024-07-26 02:08:48.892222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:07.091 [2024-07-26 02:08:48.903648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fbcf0 00:33:07.091 [2024-07-26 02:08:48.905548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:48.905580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:48.916501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f2948 00:33:07.092 [2024-07-26 02:08:48.918416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:48.918449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:48.925251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f6cc8 00:33:07.092 [2024-07-26 02:08:48.926168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:48.926196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:48.938593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e5a90 00:33:07.092 [2024-07-26 02:08:48.939627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:48.939671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:48.950585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f0bc0 00:33:07.092 
[2024-07-26 02:08:48.951633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:48.951660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:48.963961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e27f0 00:33:07.092 [2024-07-26 02:08:48.965197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:48.965239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:48.977282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e2c28 00:33:07.092 [2024-07-26 02:08:48.978672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:48.978715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:48.990567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190eb328 00:33:07.092 [2024-07-26 02:08:48.992131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:48.992161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:49.003887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc0fcc0) with pdu=0x2000190f0bc0 00:33:07.092 [2024-07-26 02:08:49.005680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:49.005725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:49.017304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fc128 00:33:07.092 [2024-07-26 02:08:49.019257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:49.019309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:49.030693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ef6a8 00:33:07.092 [2024-07-26 02:08:49.032713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:49.032757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:49.039646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190de470 00:33:07.092 [2024-07-26 02:08:49.040519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:49.040565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:49.053085] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e5ec8 00:33:07.092 [2024-07-26 02:08:49.054247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:49.054291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:49.065148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190eee38 00:33:07.092 [2024-07-26 02:08:49.066220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:49.066249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:49.078472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e2c28 00:33:07.092 [2024-07-26 02:08:49.079696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:49.079741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:07.092 [2024-07-26 02:08:49.091857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e27f0 00:33:07.092 [2024-07-26 02:08:49.093264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.092 [2024-07-26 02:08:49.093307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:33:07.352 [2024-07-26 02:08:49.105250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f0350 00:33:07.352 [2024-07-26 02:08:49.106868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.352 [2024-07-26 02:08:49.106912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:07.352 [2024-07-26 02:08:49.118650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190eee38 00:33:07.352 [2024-07-26 02:08:49.120369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.352 [2024-07-26 02:08:49.120396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:07.352 [2024-07-26 02:08:49.131807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fbcf0 00:33:07.352 [2024-07-26 02:08:49.133699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.352 [2024-07-26 02:08:49.133753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:07.352 [2024-07-26 02:08:49.143613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190df988 00:33:07.353 [2024-07-26 02:08:49.145040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.145088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.155170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fb8b8 00:33:07.353 [2024-07-26 02:08:49.156551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.156578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.168387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fef90 00:33:07.353 [2024-07-26 02:08:49.169949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.169992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.181593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ed0b0 00:33:07.353 [2024-07-26 02:08:49.183317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.183359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.194779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fe720 00:33:07.353 [2024-07-26 02:08:49.196727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.196769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.208006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e4140 00:33:07.353 [2024-07-26 02:08:49.210090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.210133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.216983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f1868 00:33:07.353 [2024-07-26 02:08:49.217880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.217906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.230314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190de8a8 00:33:07.353 [2024-07-26 02:08:49.231361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.231404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.243593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e5a90 00:33:07.353 [2024-07-26 02:08:49.244838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.244866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.255720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190fc998 00:33:07.353 [2024-07-26 02:08:49.257034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.257086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.268793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f6020 00:33:07.353 [2024-07-26 02:08:49.270014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.270069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.282078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e5ec8 00:33:07.353 [2024-07-26 02:08:49.283458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.283503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.295411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e95a0 00:33:07.353 [2024-07-26 02:08:49.296958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 
[2024-07-26 02:08:49.297003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.307244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f5378 00:33:07.353 [2024-07-26 02:08:49.308292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.308334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.320093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190edd58 00:33:07.353 [2024-07-26 02:08:49.320980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.321009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.333380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f96f8 00:33:07.353 [2024-07-26 02:08:49.334450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.334480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.345296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f0ff8 00:33:07.353 [2024-07-26 02:08:49.347175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22772 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.347203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:07.353 [2024-07-26 02:08:49.356228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f5be8 00:33:07.353 [2024-07-26 02:08:49.357088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.353 [2024-07-26 02:08:49.357132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:07.612 [2024-07-26 02:08:49.369570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ef270 00:33:07.612 [2024-07-26 02:08:49.370604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.612 [2024-07-26 02:08:49.370648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:07.612 [2024-07-26 02:08:49.382851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190edd58 00:33:07.612 [2024-07-26 02:08:49.384081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.612 [2024-07-26 02:08:49.384124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:07.612 [2024-07-26 02:08:49.396187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e4de8 00:33:07.612 [2024-07-26 02:08:49.397550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:8952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.612 [2024-07-26 02:08:49.397594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:07.612 [2024-07-26 02:08:49.407944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e99d8 00:33:07.612 [2024-07-26 02:08:49.408803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.612 [2024-07-26 02:08:49.408847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:07.612 [2024-07-26 02:08:49.420754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190e7c50 00:33:07.612 [2024-07-26 02:08:49.421487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.612 [2024-07-26 02:08:49.421520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:07.612 [2024-07-26 02:08:49.434041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190ed920 00:33:07.612 [2024-07-26 02:08:49.434920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.612 [2024-07-26 02:08:49.434948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:07.612 [2024-07-26 02:08:49.447259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc0fcc0) with pdu=0x2000190f5be8 00:33:07.612 [2024-07-26 02:08:49.448318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:07.612 [2024-07-26 02:08:49.448347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:07.612 00:33:07.612 Latency(us) 00:33:07.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.612 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:07.612 nvme0n1 : 2.01 20048.79 78.32 0.00 0.00 6373.37 2682.12 18058.81 00:33:07.612 =================================================================================================================== 00:33:07.612 Total : 20048.79 78.32 0.00 0.00 6373.37 2682.12 18058.81 00:33:07.612 0 00:33:07.612 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:07.612 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:07.612 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:07.612 | .driver_specific 00:33:07.612 | .nvme_error 00:33:07.612 | .status_code 00:33:07.612 | .command_transient_transport_error' 00:33:07.612 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 )) 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2414107 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2414107 ']' 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 
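The `get_transient_errcount` helper above reads the error counter out of the `bdev_get_iostat` RPC response with a jq filter. A minimal Python sketch of the same field walk, against a hypothetical trimmed sample of the RPC output (the real response carries many more statistics):

```python
import json

# Hypothetical, trimmed sample of `bdev_get_iostat -b nvme0n1` output;
# only the fields the log's jq filter walks through are shown.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 157
          }
        }
      }
    }
  ]
}
""")

# Equivalent of: jq -r '.bdevs[0] | .driver_specific | .nvme_error
#                       | .status_code | .command_transient_transport_error'
count = (sample["bdevs"][0]["driver_specific"]["nvme_error"]
         ["status_code"]["command_transient_transport_error"])
print(count)
```

The test then only asserts `(( count > 0 ))`, i.e. that at least one injected digest error surfaced as a transient transport error; 157 here matches the `(( 157 > 0 ))` check visible in the trace.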
-- # kill -0 2414107 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2414107 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2414107' 00:33:07.872 killing process with pid 2414107 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2414107 00:33:07.872 Received shutdown signal, test time was about 2.000000 seconds 00:33:07.872 00:33:07.872 Latency(us) 00:33:07.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.872 =================================================================================================================== 00:33:07.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:07.872 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2414107 00:33:08.130 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:08.131 02:08:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2414519 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2414519 /var/tmp/bperf.sock 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2414519 ']' 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:08.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:08.131 02:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.131 [2024-07-26 02:08:50.024257] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
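`waitforlisten` above blocks until the freshly launched bdevperf process accepts RPCs on `/var/tmp/bperf.sock`. A hedged Python sketch of that polling pattern (the real helper is a shell function in SPDK's `autotest_common.sh`; the timeout and retry interval below are illustrative, not its actual values):

```python
import socket
import time


def waitforlisten(path: str, timeout: float = 10.0, interval: float = 0.1) -> bool:
    """Poll until a UNIX-domain socket at `path` accepts connections.

    Illustrative stand-in for the shell helper seen in the log; returns
    True once a connect() succeeds, False if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False
```

Once this returns, the test can safely issue `rpc.py -s /var/tmp/bperf.sock …` calls such as the `bdev_nvme_set_options` invocation that follows in the trace.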
00:33:08.131 [2024-07-26 02:08:50.024342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2414519 ] 00:33:08.131 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:08.131 Zero copy mechanism will not be used. 00:33:08.131 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.131 [2024-07-26 02:08:50.086921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.389 [2024-07-26 02:08:50.186021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.389 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:08.389 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:33:08.389 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:08.389 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:08.647 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:08.647 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.647 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:08.647 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.647 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.647 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:09.214 nvme0n1 00:33:09.214 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:09.214 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.214 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:09.214 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.214 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:09.214 02:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:09.214 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:09.214 Zero copy mechanism will not be used. 00:33:09.214 Running I/O for 2 seconds... 
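The controller is attached with `--ddgst`, so every NVMe/TCP data PDU carries a CRC32C digest, and `accel_error_inject_error -o crc32c -t corrupt -i 32` makes the accel layer corrupt that checksum, producing the `data_crc32_calc_done: *ERROR*: Data digest error` records that follow. A minimal bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78) sketch, for reference only — SPDK's own implementation is table-driven or hardware-accelerated:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), as used for NVMe/TCP HDGST/DDGST.

    Illustrative reference implementation: init/final XOR 0xFFFFFFFF,
    reflected polynomial 0x82F63B78. Not performance-representative.
    """
    crc ^= 0xFFFFFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF
```

When the receiver's computed value disagrees with the DDGST field in the PDU (here, because the injector corrupted the transmitted digest), the command completes with the TRANSIENT TRANSPORT ERROR (00/22) status seen throughout this run.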
00:33:09.214 [2024-07-26 02:08:51.099936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.100379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.100438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.109486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.109879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.109915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.118801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.119174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.119206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.128176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.128533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.128566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.136083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.136498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.136532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.144056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.144419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.144453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.151835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.152204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.152234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.159898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.160241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.160274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.167617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.167962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.167996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.175272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.175707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.175740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.183324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.183693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.183723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.191181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.191527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:09.214 [2024-07-26 02:08:51.191560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.198973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.199335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.199365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.206649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.206995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.207024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.214295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.214656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.214703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.214 [2024-07-26 02:08:51.221945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.214 [2024-07-26 02:08:51.222284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.214 [2024-07-26 02:08:51.222318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.229304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.475 [2024-07-26 02:08:51.229639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.475 [2024-07-26 02:08:51.229671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.237011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.475 [2024-07-26 02:08:51.237364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.475 [2024-07-26 02:08:51.237419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.245055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.475 [2024-07-26 02:08:51.245457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.475 [2024-07-26 02:08:51.245504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.252720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.475 [2024-07-26 02:08:51.253040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.475 [2024-07-26 02:08:51.253088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.260229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.475 [2024-07-26 02:08:51.260626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.475 [2024-07-26 02:08:51.260659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.267744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.475 [2024-07-26 02:08:51.268090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.475 [2024-07-26 02:08:51.268123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.275512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.475 [2024-07-26 02:08:51.275850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.475 [2024-07-26 02:08:51.275883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.283594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 
00:33:09.475 [2024-07-26 02:08:51.283915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.475 [2024-07-26 02:08:51.283945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.291568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.475 [2024-07-26 02:08:51.291870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.475 [2024-07-26 02:08:51.291899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.299678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.475 [2024-07-26 02:08:51.299994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.475 [2024-07-26 02:08:51.300027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.308322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.475 [2024-07-26 02:08:51.308649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.475 [2024-07-26 02:08:51.308683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.475 [2024-07-26 02:08:51.317335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.317657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.317686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.324705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.324854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.324884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.332183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.332508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.332539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.339581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.339932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.339961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.346952] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.347313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.347360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.354725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.355046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.355084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.362220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.362564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.362594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.370136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.370478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.370523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.377518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.377860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.377889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.385207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.385559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.385587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.393108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.393444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.393473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.401089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.401426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.401470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.409610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.409917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.409945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.418932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.419271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.419300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.427957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.428303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.428347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.435983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.436094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.436123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.444629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.444772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.444808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.453929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.454291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.454320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.461729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.461839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.461868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.470420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.470774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:09.476 [2024-07-26 02:08:51.470804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.476 [2024-07-26 02:08:51.478910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.476 [2024-07-26 02:08:51.479079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.476 [2024-07-26 02:08:51.479107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.737 [2024-07-26 02:08:51.487491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.737 [2024-07-26 02:08:51.487832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.737 [2024-07-26 02:08:51.487861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.737 [2024-07-26 02:08:51.496157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.496517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.496550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.505328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.505676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.505706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.513939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.514300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.514331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.523133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.523506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.523549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.532280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.532601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.532629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.541486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.541828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.541857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.550639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.550960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.550991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.559924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.560296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.560327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.568747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.569103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.569134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.576636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 
00:33:09.738 [2024-07-26 02:08:51.576748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.576777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.585607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.585938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.585966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.594713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.595068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.595098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.602973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.603315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.603345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.610879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.611234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.611264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.618163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.618513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.618556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.626383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.626729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.626759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.634740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.635077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.635107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.642597] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.642928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.642956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.650971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.651128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.651157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.659314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.659726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.659753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.667580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.668029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.668084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.676200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.676563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.676591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.685084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.685500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.685528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.693629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.694052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.694087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.702131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.702333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.702369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.710433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.710849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.710877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.717829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.738 [2024-07-26 02:08:51.718211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.738 [2024-07-26 02:08:51.718254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.738 [2024-07-26 02:08:51.726338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.739 [2024-07-26 02:08:51.726757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.739 [2024-07-26 02:08:51.726785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.739 [2024-07-26 02:08:51.734836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.739 [2024-07-26 02:08:51.735299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.739 [2024-07-26 02:08:51.735328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.739 [2024-07-26 02:08:51.743347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.739 [2024-07-26 02:08:51.743736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.739 [2024-07-26 02:08:51.743779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.998 [2024-07-26 02:08:51.751751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.998 [2024-07-26 02:08:51.752101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.998 [2024-07-26 02:08:51.752131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.998 [2024-07-26 02:08:51.758820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.998 [2024-07-26 02:08:51.759077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.998 [2024-07-26 02:08:51.759110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.998 [2024-07-26 02:08:51.767323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.767688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:09.999 [2024-07-26 02:08:51.767731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.775858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.776192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.776236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.783549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.783927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.783956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.792099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.792462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.792510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.800050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.800384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.800413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.807067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.807389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.807422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.814758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.815082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.815112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.821630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.821954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.821997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.828983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.829297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.829331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.835853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.836229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.836259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.843812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.844149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.844178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.850910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.851300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.851330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.859284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 
00:33:09.999 [2024-07-26 02:08:51.859699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.859742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.867215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.867509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.867543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.874620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.874920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.874958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.881808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.882144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.882177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.888618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.888988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.889016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.896144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.896465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.896493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.903029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.903328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.903374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.910269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.910578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.910620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.917280] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.917608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.917637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.924175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.924466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.924500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.931563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.931909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.931952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.938723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.939057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.939106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.945553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.945870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.945902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.999 [2024-07-26 02:08:51.952601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:09.999 [2024-07-26 02:08:51.952945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.999 [2024-07-26 02:08:51.952972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.000 [2024-07-26 02:08:51.959375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.000 [2024-07-26 02:08:51.959710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.000 [2024-07-26 02:08:51.959753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.000 [2024-07-26 02:08:51.966189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.000 [2024-07-26 02:08:51.966534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.000 [2024-07-26 02:08:51.966562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.000 [2024-07-26 02:08:51.973570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.000 [2024-07-26 02:08:51.973947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.000 [2024-07-26 02:08:51.973976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.000 [2024-07-26 02:08:51.980259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.000 [2024-07-26 02:08:51.980568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.000 [2024-07-26 02:08:51.980596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.000 [2024-07-26 02:08:51.987608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.000 [2024-07-26 02:08:51.987959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.000 [2024-07-26 02:08:51.987996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.000 [2024-07-26 02:08:51.994916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.000 [2024-07-26 02:08:51.995291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.000 [2024-07-26 02:08:51.995321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.000 [2024-07-26 02:08:52.003961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.000 [2024-07-26 02:08:52.004326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.000 [2024-07-26 02:08:52.004356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.259 [2024-07-26 02:08:52.011265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.259 [2024-07-26 02:08:52.011606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.259 [2024-07-26 02:08:52.011635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.259 [2024-07-26 02:08:52.018164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.259 [2024-07-26 02:08:52.018474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.259 [2024-07-26 02:08:52.018503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.259 [2024-07-26 02:08:52.025674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.259 [2024-07-26 02:08:52.026072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:10.260 [2024-07-26 02:08:52.026102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.033725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.034047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.034085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.040638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.040979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.041009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.047361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.047666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.047696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.053869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.054200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.054230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.060960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.061264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.061306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.068292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.068627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.068657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.075404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.075728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.075759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.082332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.082667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.082695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.089560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.089872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.089916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.096965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.097266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.097300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.104134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.104484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.104512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.111476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 
00:33:10.260 [2024-07-26 02:08:52.111795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.111839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.118753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.119056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.119112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.125978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.126301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.126331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.132989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.133315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.133348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.141785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.142157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.142186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.149086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.149396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.149444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.156217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.156548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.156576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.164143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.164487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.164516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.172774] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.173173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.173215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.180089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.180389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.180418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.187486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.187843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.187878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.195609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.196088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.196115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.203890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.204251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.204281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.210934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.211265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.211294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.218166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.260 [2024-07-26 02:08:52.218484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.260 [2024-07-26 02:08:52.218512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.260 [2024-07-26 02:08:52.225254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.261 [2024-07-26 02:08:52.225566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.261 [2024-07-26 02:08:52.225597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.261 [2024-07-26 02:08:52.232422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.261 [2024-07-26 02:08:52.232784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.261 [2024-07-26 02:08:52.232813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.261 [2024-07-26 02:08:52.239570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.261 [2024-07-26 02:08:52.239878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.261 [2024-07-26 02:08:52.239922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.261 [2024-07-26 02:08:52.246652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.261 [2024-07-26 02:08:52.246996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.261 [2024-07-26 02:08:52.247025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.261 [2024-07-26 02:08:52.253347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.261 [2024-07-26 02:08:52.253712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.261 [2024-07-26 02:08:52.253740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.261 [2024-07-26 02:08:52.260189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.261 [2024-07-26 02:08:52.260466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.261 [2024-07-26 02:08:52.260507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.261 [2024-07-26 02:08:52.266564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.261 [2024-07-26 02:08:52.266880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.261 [2024-07-26 02:08:52.266909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.273025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.273346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.520 [2024-07-26 02:08:52.273375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.279858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.280151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:10.520 [2024-07-26 02:08:52.280181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.286761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.287097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.520 [2024-07-26 02:08:52.287127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.293164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.293514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.520 [2024-07-26 02:08:52.293543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.299967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.300305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.520 [2024-07-26 02:08:52.300335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.306622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.306989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.520 [2024-07-26 02:08:52.307039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.313052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.313355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.520 [2024-07-26 02:08:52.313385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.319909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.320207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.520 [2024-07-26 02:08:52.320243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.327805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.328126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.520 [2024-07-26 02:08:52.328168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.335606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.335968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.520 [2024-07-26 02:08:52.335996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.343503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.343832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.520 [2024-07-26 02:08:52.343861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.351661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.351985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.520 [2024-07-26 02:08:52.352034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.520 [2024-07-26 02:08:52.359653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.520 [2024-07-26 02:08:52.360042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.360079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.367130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 
00:33:10.521 [2024-07-26 02:08:52.367496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.367540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.375323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.375710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.375761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.383520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.383835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.383865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.392203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.392571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.392615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.400852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.401171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.401202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.408936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.409316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.409361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.417229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.417641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.417675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.426512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.426932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.426962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.434811] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.435177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.435208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.442708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.443078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.443108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.450887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.451196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.451226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.458847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.459235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.459265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.467116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.467536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.467565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.475177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.475513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.475543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.483452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.483800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.483834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.491372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.491777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.491806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.499528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.499872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.499901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.507842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.508224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.508254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.516251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.516599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.516629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.521 [2024-07-26 02:08:52.524660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.521 [2024-07-26 02:08:52.525037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.521 [2024-07-26 02:08:52.525084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.793 [2024-07-26 02:08:52.532793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.793 [2024-07-26 02:08:52.533184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.793 [2024-07-26 02:08:52.533214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.793 [2024-07-26 02:08:52.540893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.793 [2024-07-26 02:08:52.541254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.793 [2024-07-26 02:08:52.541284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.793 [2024-07-26 02:08:52.549053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.793 [2024-07-26 02:08:52.549368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.793 [2024-07-26 02:08:52.549401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.793 [2024-07-26 02:08:52.556492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.793 [2024-07-26 02:08:52.556771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:10.793 [2024-07-26 02:08:52.556807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.793 [2024-07-26 02:08:52.564286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.793 [2024-07-26 02:08:52.564659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.793 [2024-07-26 02:08:52.564690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.793 [2024-07-26 02:08:52.572325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.793 [2024-07-26 02:08:52.572728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.793 [2024-07-26 02:08:52.572758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.794 [2024-07-26 02:08:52.580784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.794 [2024-07-26 02:08:52.581137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.794 [2024-07-26 02:08:52.581167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.794 [2024-07-26 02:08:52.588978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.794 [2024-07-26 02:08:52.589355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.794 [2024-07-26 02:08:52.589395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.794 [2024-07-26 02:08:52.596947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.794 [2024-07-26 02:08:52.597321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.794 [2024-07-26 02:08:52.597355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.794 [2024-07-26 02:08:52.605243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.794 [2024-07-26 02:08:52.605654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.794 [2024-07-26 02:08:52.605687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.794 [2024-07-26 02:08:52.613450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.794 [2024-07-26 02:08:52.613833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.794 [2024-07-26 02:08:52.613877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.794 [2024-07-26 02:08:52.621886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.794 [2024-07-26 02:08:52.622252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.794 [2024-07-26 02:08:52.622282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.794 [2024-07-26 02:08:52.629997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.794 [2024-07-26 02:08:52.630395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.794 [2024-07-26 02:08:52.630443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.794 [2024-07-26 02:08:52.638241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.795 [2024-07-26 02:08:52.638513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.795 [2024-07-26 02:08:52.638542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.795 [2024-07-26 02:08:52.646343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.795 [2024-07-26 02:08:52.646690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.795 [2024-07-26 02:08:52.646719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.795 [2024-07-26 02:08:52.654652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 
00:33:10.795 [2024-07-26 02:08:52.655055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.795 [2024-07-26 02:08:52.655091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.795 [2024-07-26 02:08:52.663027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.795 [2024-07-26 02:08:52.663384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.795 [2024-07-26 02:08:52.663413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.795 [2024-07-26 02:08:52.671231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.795 [2024-07-26 02:08:52.671645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.795 [2024-07-26 02:08:52.671677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.795 [2024-07-26 02:08:52.679427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.795 [2024-07-26 02:08:52.679845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.795 [2024-07-26 02:08:52.679873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.795 [2024-07-26 02:08:52.687425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.795 [2024-07-26 02:08:52.687807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.795 [2024-07-26 02:08:52.687851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.795 [2024-07-26 02:08:52.695649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.795 [2024-07-26 02:08:52.695971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.795 [2024-07-26 02:08:52.696001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.795 [2024-07-26 02:08:52.703209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.795 [2024-07-26 02:08:52.703562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.795 [2024-07-26 02:08:52.703592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.796 [2024-07-26 02:08:52.709336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.796 [2024-07-26 02:08:52.709634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.796 [2024-07-26 02:08:52.709670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.796 [2024-07-26 02:08:52.715794] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.796 [2024-07-26 02:08:52.716131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.796 [2024-07-26 02:08:52.716161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.796 [2024-07-26 02:08:52.722224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.796 [2024-07-26 02:08:52.722555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.796 [2024-07-26 02:08:52.722589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.796 [2024-07-26 02:08:52.729580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.796 [2024-07-26 02:08:52.729980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.796 [2024-07-26 02:08:52.730024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.796 [2024-07-26 02:08:52.736339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.796 [2024-07-26 02:08:52.736641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.796 [2024-07-26 02:08:52.736671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:33:10.796 [2024-07-26 02:08:52.743020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.796 [2024-07-26 02:08:52.743323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.796 [2024-07-26 02:08:52.743356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.796 [2024-07-26 02:08:52.749590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.796 [2024-07-26 02:08:52.749875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.796 [2024-07-26 02:08:52.749905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.796 [2024-07-26 02:08:52.756002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.796 [2024-07-26 02:08:52.756307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.796 [2024-07-26 02:08:52.756337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.797 [2024-07-26 02:08:52.762430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.797 [2024-07-26 02:08:52.762732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.797 [2024-07-26 02:08:52.762761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.797 [2024-07-26 02:08:52.769105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.797 [2024-07-26 02:08:52.769410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.797 [2024-07-26 02:08:52.769439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.797 [2024-07-26 02:08:52.775712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.797 [2024-07-26 02:08:52.776065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.797 [2024-07-26 02:08:52.776095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:10.797 [2024-07-26 02:08:52.782089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.797 [2024-07-26 02:08:52.782361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.797 [2024-07-26 02:08:52.782399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.797 [2024-07-26 02:08:52.788544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.797 [2024-07-26 02:08:52.788821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.797 [2024-07-26 02:08:52.788855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:10.797 [2024-07-26 02:08:52.795146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.797 [2024-07-26 02:08:52.795426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.797 [2024-07-26 02:08:52.795456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:10.797 [2024-07-26 02:08:52.801606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:10.797 [2024-07-26 02:08:52.801878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.797 [2024-07-26 02:08:52.801907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.062 [2024-07-26 02:08:52.808084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.062 [2024-07-26 02:08:52.808362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.062 [2024-07-26 02:08:52.808392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.062 [2024-07-26 02:08:52.814541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.062 [2024-07-26 02:08:52.814827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:11.062 [2024-07-26 02:08:52.814858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.062 [2024-07-26 02:08:52.821309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.821631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.821665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.827698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.827987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.828016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.834372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.834660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.834709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.840906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.841203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.841233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.847179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.847534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.847567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.853936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.854227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.854257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.860418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.860731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.860764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.867304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.867588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.867617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.873763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.874109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.874144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.880585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.880884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.880919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.887340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.887650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.887679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.893639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 
00:33:11.063 [2024-07-26 02:08:52.893966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.893996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.900032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.900391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.900420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.907286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.907680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.907710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.915504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.915895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.915937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.923900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.924249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.924280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.930991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.931347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.931376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.939284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.939654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.939696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.947361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.947763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.947806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.955795] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.956225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.956255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.963863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.964299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.964348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.972173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.972528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.972557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.980024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.980368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.980398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.987987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.988341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.988371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:52.995708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:52.996053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:52.996093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:53.003469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.063 [2024-07-26 02:08:53.003918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.063 [2024-07-26 02:08:53.003960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.063 [2024-07-26 02:08:53.011636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.064 [2024-07-26 02:08:53.012082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.064 [2024-07-26 02:08:53.012124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.064 [2024-07-26 02:08:53.019640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.064 [2024-07-26 02:08:53.020026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.064 [2024-07-26 02:08:53.020084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.064 [2024-07-26 02:08:53.027604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.064 [2024-07-26 02:08:53.027978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.064 [2024-07-26 02:08:53.028022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.064 [2024-07-26 02:08:53.035519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.064 [2024-07-26 02:08:53.035875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.064 [2024-07-26 02:08:53.035905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.064 [2024-07-26 02:08:53.043311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.064 [2024-07-26 02:08:53.043729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.064 [2024-07-26 02:08:53.043757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.064 [2024-07-26 02:08:53.051680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.064 [2024-07-26 02:08:53.052033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.064 [2024-07-26 02:08:53.052071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.064 [2024-07-26 02:08:53.059798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.064 [2024-07-26 02:08:53.060139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.064 [2024-07-26 02:08:53.060169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.064 [2024-07-26 02:08:53.068323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.064 [2024-07-26 02:08:53.068762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.064 [2024-07-26 02:08:53.068794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:11.321 [2024-07-26 02:08:53.076646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.321 [2024-07-26 02:08:53.077037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:11.321 [2024-07-26 02:08:53.077094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:11.321 [2024-07-26 02:08:53.084609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.321 [2024-07-26 02:08:53.085036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.321 [2024-07-26 02:08:53.085072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:11.321 [2024-07-26 02:08:53.092812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc11940) with pdu=0x2000190fef90 00:33:11.321 [2024-07-26 02:08:53.093221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.321 [2024-07-26 02:08:53.093252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:11.321 00:33:11.321 Latency(us) 00:33:11.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.321 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:11.321 nvme0n1 : 2.00 4013.67 501.71 0.00 0.00 3976.54 2682.12 9660.49 00:33:11.321 =================================================================================================================== 00:33:11.321 Total : 4013.67 501.71 0.00 0.00 3976.54 2682.12 9660.49 00:33:11.321 0 00:33:11.321 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:11.321 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:11.321 02:08:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:11.321 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:11.321 | .driver_specific 00:33:11.321 | .nvme_error 00:33:11.321 | .status_code 00:33:11.321 | .command_transient_transport_error' 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 259 > 0 )) 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2414519 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2414519 ']' 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2414519 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2414519 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2414519' 00:33:11.579 killing process with pid 2414519 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2414519 00:33:11.579 Received shutdown signal, test time was about 2.000000 seconds 00:33:11.579 00:33:11.579 
Latency(us) 00:33:11.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.579 =================================================================================================================== 00:33:11.579 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:11.579 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2414519 00:33:11.838 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2413154 00:33:11.838 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2413154 ']' 00:33:11.838 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2413154 00:33:11.838 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:33:11.838 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:11.838 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2413154 00:33:11.838 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:11.838 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:11.838 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2413154' 00:33:11.838 killing process with pid 2413154 00:33:11.838 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2413154 00:33:11.838 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2413154 00:33:12.096 00:33:12.096 real 0m15.252s 00:33:12.096 user 0m30.153s 00:33:12.096 sys 0m4.219s 00:33:12.096 02:08:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.096 ************************************ 00:33:12.096 END TEST nvmf_digest_error 00:33:12.096 ************************************ 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:12.096 rmmod nvme_tcp 00:33:12.096 rmmod nvme_fabrics 00:33:12.096 rmmod nvme_keyring 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2413154 ']' 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2413154 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2413154 ']' 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2413154 00:33:12.096 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2413154) - No such process 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2413154 is not found' 00:33:12.096 Process with pid 2413154 is not found 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.096 02:08:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.003 02:08:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:14.003 00:33:14.003 real 0m34.989s 00:33:14.003 user 1m1.183s 00:33:14.003 sys 0m10.059s 00:33:14.003 02:08:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:14.003 02:08:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:14.003 ************************************ 00:33:14.003 END TEST nvmf_digest 00:33:14.003 ************************************ 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:14.261 02:08:56 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.261 ************************************ 00:33:14.261 START TEST nvmf_bdevperf 00:33:14.261 ************************************ 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:14.261 * Looking for test storage... 00:33:14.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:14.261 
02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.261 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- 
# '[' -z tcp ']' 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:14.262 02:08:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@295 -- # net_devs=() 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.163 02:08:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:16.163 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:16.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # 
[[ ice == unbound ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:16.163 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.163 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.164 02:08:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:16.164 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.164 02:08:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.164 02:08:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:16.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:16.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:33:16.164 00:33:16.164 --- 10.0.0.2 ping statistics --- 00:33:16.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.164 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:16.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:33:16.164 00:33:16.164 --- 10.0.0.1 ping statistics --- 00:33:16.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.164 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:16.164 
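The trace above is `nvmf_tcp_init` from nvmf/common.sh: one NIC port is moved into a network namespace to play the target, the other stays in the host namespace as the initiator, and a ping in each direction verifies the 10.0.0.x link before the target starts. A condensed replay of those steps, as a sketch (interface names are the ones from this log; `DRY_RUN=1`, the default here, only records the commands — set `DRY_RUN=0` and run as root to actually configure the namespace):

```shell
#!/usr/bin/env bash
# Condensed replay of the nvmf_tcp_init steps traced in the log above.

TARGET_IF=${TARGET_IF:-cvl_0_0}        # port handed to the SPDK target
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1}  # port kept in the host namespace
NS="${TARGET_IF}_ns_spdk"

plan=()
run() {
    plan+=("$*")                        # record every command for inspection
    if [[ ${DRY_RUN:-1} -eq 0 ]]; then "$@"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"             # target port moves into the netns
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"      # NVMF_INITIATOR_IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # NVMF_FIRST_TARGET_IP
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # initiator -> target reachability

printf '%s\n' "${plan[@]}"
```

With both directions pinging, the helper returns 0 and the test proceeds to start the target inside the namespace.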
02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2416875 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2416875 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2416875 ']' 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:16.164 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.164 [2024-07-26 02:08:58.152974] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:33:16.164 [2024-07-26 02:08:58.153080] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.424 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.424 [2024-07-26 02:08:58.228194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:16.424 [2024-07-26 02:08:58.323153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.424 [2024-07-26 02:08:58.323208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.424 [2024-07-26 02:08:58.323242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.424 [2024-07-26 02:08:58.323255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.424 [2024-07-26 02:08:58.323267] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
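The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above comes from `waitforlisten` in autotest_common.sh. A minimal sketch of the idea — poll until the target process is alive and its RPC UNIX socket exists; the real helper does more (it retries the RPC client against the socket), so names and retry counts here are illustrative:

```shell
# Poll until process $1 is alive and the RPC UNIX socket $2 exists.
# Returns 0 on success, 1 if the process dies or the retries run out.
waitforlisten() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} tries=${3:-100} i
    for ((i = 0; i < tries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $rpc_sock ]] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```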
00:33:16.424 [2024-07-26 02:08:58.323341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.424 [2024-07-26 02:08:58.323389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:16.424 [2024-07-26 02:08:58.323392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.684 [2024-07-26 02:08:58.471400] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.684 Malloc0 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.684 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:16.685 [2024-07-26 02:08:58.532815] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:16.685 
02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:16.685 { 00:33:16.685 "params": { 00:33:16.685 "name": "Nvme$subsystem", 00:33:16.685 "trtype": "$TEST_TRANSPORT", 00:33:16.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:16.685 "adrfam": "ipv4", 00:33:16.685 "trsvcid": "$NVMF_PORT", 00:33:16.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:16.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:16.685 "hdgst": ${hdgst:-false}, 00:33:16.685 "ddgst": ${ddgst:-false} 00:33:16.685 }, 00:33:16.685 "method": "bdev_nvme_attach_controller" 00:33:16.685 } 00:33:16.685 EOF 00:33:16.685 )") 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:16.685 02:08:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:16.685 "params": { 00:33:16.685 "name": "Nvme1", 00:33:16.685 "trtype": "tcp", 00:33:16.685 "traddr": "10.0.0.2", 00:33:16.685 "adrfam": "ipv4", 00:33:16.685 "trsvcid": "4420", 00:33:16.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:16.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:16.685 "hdgst": false, 00:33:16.685 "ddgst": false 00:33:16.685 }, 00:33:16.685 "method": "bdev_nvme_attach_controller" 00:33:16.685 }' 00:33:16.685 [2024-07-26 02:08:58.579588] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
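The `gen_nvmf_target_json` trace above builds, via a heredoc, one `bdev_nvme_attach_controller` parameter block per subsystem and feeds the result to bdevperf through `--json /dev/fd/62`; the expanded JSON is printed right after the `printf '%s\n'` line. A sketch reproducing just that parameter block (the real helper in nvmf/common.sh also wraps it in the full bdevperf config envelope, so this is not a drop-in replacement):

```shell
# Emit the attach-controller params for subsystem $1, matching the JSON
# printed in the log above (hdgst/ddgst default to false).
gen_attach_params() {
    local subsystem=${1:-1}
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

This is why the bdevperf process needs no RPC calls of its own: the controller attach is described declaratively in the JSON it reads at startup.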
00:33:16.685 [2024-07-26 02:08:58.579673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2417008 ] 00:33:16.685 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.685 [2024-07-26 02:08:58.640571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.945 [2024-07-26 02:08:58.732009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.945 Running I/O for 1 seconds... 00:33:18.322 00:33:18.322 Latency(us) 00:33:18.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.322 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:18.322 Verification LBA range: start 0x0 length 0x4000 00:33:18.322 Nvme1n1 : 1.01 8795.46 34.36 0.00 0.00 14496.68 2924.85 14854.83 00:33:18.322 =================================================================================================================== 00:33:18.322 Total : 8795.46 34.36 0.00 0.00 14496.68 2924.85 14854.83 00:33:18.322 02:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2417167 00:33:18.322 02:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:18.322 02:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:18.322 02:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:18.322 02:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:18.322 02:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:18.322 02:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.322 02:09:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.322 { 00:33:18.322 "params": { 00:33:18.322 "name": "Nvme$subsystem", 00:33:18.322 "trtype": "$TEST_TRANSPORT", 00:33:18.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.322 "adrfam": "ipv4", 00:33:18.322 "trsvcid": "$NVMF_PORT", 00:33:18.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.322 "hdgst": ${hdgst:-false}, 00:33:18.322 "ddgst": ${ddgst:-false} 00:33:18.322 }, 00:33:18.322 "method": "bdev_nvme_attach_controller" 00:33:18.322 } 00:33:18.322 EOF 00:33:18.322 )") 00:33:18.322 02:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:18.322 02:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:18.322 02:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:18.322 02:09:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:18.322 "params": { 00:33:18.322 "name": "Nvme1", 00:33:18.322 "trtype": "tcp", 00:33:18.322 "traddr": "10.0.0.2", 00:33:18.322 "adrfam": "ipv4", 00:33:18.322 "trsvcid": "4420", 00:33:18.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:18.322 "hdgst": false, 00:33:18.322 "ddgst": false 00:33:18.322 }, 00:33:18.322 "method": "bdev_nvme_attach_controller" 00:33:18.322 }' 00:33:18.322 [2024-07-26 02:09:00.197869] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:33:18.322 [2024-07-26 02:09:00.197948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2417167 ] 00:33:18.322 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.322 [2024-07-26 02:09:00.261767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.580 [2024-07-26 02:09:00.348024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.580 Running I/O for 15 seconds... 00:33:21.866 02:09:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2416875 00:33:21.866 02:09:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:21.866 [2024-07-26 02:09:03.167582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.167646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.167678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.167698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.167717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.167733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.167751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51952 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.167767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.167784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.167801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.167818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.167834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.167852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.167867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.167884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.167900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.167917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.167933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 
02:09:03.167950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.167975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.167993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.168009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.168048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.168096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.168141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.168171] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.866 [2024-07-26 02:09:03.168201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.866 [2024-07-26 02:09:03.168811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.866 [2024-07-26 02:09:03.168828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.867 [2024-07-26 02:09:03.168847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.168864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.867 [2024-07-26 02:09:03.168880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.168897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.867 [2024-07-26 02:09:03.168911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.168928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.867 
[2024-07-26 02:09:03.168942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.168959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.867 [2024-07-26 02:09:03.168973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.168990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 
[2024-07-26 02:09:03.169513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.169983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.169999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.170014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.170030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 
[2024-07-26 02:09:03.170055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.170082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.867 [2024-07-26 02:09:03.170101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.867 [2024-07-26 02:09:03.170133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 
[2024-07-26 02:09:03.170613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.868 [2024-07-26 02:09:03.170797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.868 [2024-07-26 02:09:03.170827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.170976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.170991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.171022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.171054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.171095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.171141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 
[2024-07-26 02:09:03.171169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.171197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.171225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.171253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.171280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.171313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.868 [2024-07-26 02:09:03.171364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.868 [2024-07-26 02:09:03.171379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 
[2024-07-26 02:09:03.171717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.869 [2024-07-26 02:09:03.171841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221d390 is same with the state(5) to be set 00:33:21.869 [2024-07-26 02:09:03.171876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:21.869 [2024-07-26 02:09:03.171889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:21.869 
[2024-07-26 02:09:03.171902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52744 len:8 PRP1 0x0 PRP2 0x0 00:33:21.869 [2024-07-26 02:09:03.171916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.171980] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x221d390 was disconnected and freed. reset controller. 00:33:21.869 [2024-07-26 02:09:03.172071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.869 [2024-07-26 02:09:03.172095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.172113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.869 [2024-07-26 02:09:03.172144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.172158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.869 [2024-07-26 02:09:03.172171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.172185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.869 [2024-07-26 02:09:03.172198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.869 [2024-07-26 02:09:03.172211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.869 [2024-07-26 02:09:03.176022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.869 [2024-07-26 02:09:03.176069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.869 [2024-07-26 02:09:03.176746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.869 [2024-07-26 02:09:03.176783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.869 [2024-07-26 02:09:03.176802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.869 [2024-07-26 02:09:03.177043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.869 [2024-07-26 02:09:03.177302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.869 [2024-07-26 02:09:03.177326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.869 [2024-07-26 02:09:03.177345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.869 [2024-07-26 02:09:03.180927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.869 [2024-07-26 02:09:03.190208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.869 [2024-07-26 02:09:03.190695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.869 [2024-07-26 02:09:03.190737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.869 [2024-07-26 02:09:03.190754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.869 [2024-07-26 02:09:03.191013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.869 [2024-07-26 02:09:03.191265] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.869 [2024-07-26 02:09:03.191289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.869 [2024-07-26 02:09:03.191305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.869 [2024-07-26 02:09:03.194877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.869 [2024-07-26 02:09:03.204158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.869 [2024-07-26 02:09:03.204552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.869 [2024-07-26 02:09:03.204583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.869 [2024-07-26 02:09:03.204601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.869 [2024-07-26 02:09:03.204839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.869 [2024-07-26 02:09:03.205093] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.869 [2024-07-26 02:09:03.205117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.869 [2024-07-26 02:09:03.205132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.869 [2024-07-26 02:09:03.208711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.869 [2024-07-26 02:09:03.218200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.869 [2024-07-26 02:09:03.218634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.869 [2024-07-26 02:09:03.218665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.869 [2024-07-26 02:09:03.218682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.869 [2024-07-26 02:09:03.218921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.869 [2024-07-26 02:09:03.219180] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.869 [2024-07-26 02:09:03.219216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.869 [2024-07-26 02:09:03.219233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.869 [2024-07-26 02:09:03.222803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.869 [2024-07-26 02:09:03.232085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.869 [2024-07-26 02:09:03.232499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.869 [2024-07-26 02:09:03.232530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.870 [2024-07-26 02:09:03.232548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.870 [2024-07-26 02:09:03.232785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.870 [2024-07-26 02:09:03.233028] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.870 [2024-07-26 02:09:03.233050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.870 [2024-07-26 02:09:03.233076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.870 [2024-07-26 02:09:03.236658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.870 [2024-07-26 02:09:03.246107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.870 [2024-07-26 02:09:03.246539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.870 [2024-07-26 02:09:03.246582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.870 [2024-07-26 02:09:03.246598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.870 [2024-07-26 02:09:03.246865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.870 [2024-07-26 02:09:03.247120] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.870 [2024-07-26 02:09:03.247143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.870 [2024-07-26 02:09:03.247159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.870 [2024-07-26 02:09:03.250724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.870 [2024-07-26 02:09:03.259994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.870 [2024-07-26 02:09:03.260400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.870 [2024-07-26 02:09:03.260442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.870 [2024-07-26 02:09:03.260457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.870 [2024-07-26 02:09:03.260710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.870 [2024-07-26 02:09:03.260953] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.870 [2024-07-26 02:09:03.260976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.870 [2024-07-26 02:09:03.260991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.870 [2024-07-26 02:09:03.264590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.870 [2024-07-26 02:09:03.273860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.870 [2024-07-26 02:09:03.274308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.870 [2024-07-26 02:09:03.274336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.870 [2024-07-26 02:09:03.274352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.870 [2024-07-26 02:09:03.274604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.870 [2024-07-26 02:09:03.274846] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.870 [2024-07-26 02:09:03.274869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.870 [2024-07-26 02:09:03.274885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.870 [2024-07-26 02:09:03.278473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.870 [2024-07-26 02:09:03.287747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.870 [2024-07-26 02:09:03.288140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.870 [2024-07-26 02:09:03.288171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.870 [2024-07-26 02:09:03.288189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.870 [2024-07-26 02:09:03.288427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.870 [2024-07-26 02:09:03.288668] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.870 [2024-07-26 02:09:03.288692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.870 [2024-07-26 02:09:03.288707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.870 [2024-07-26 02:09:03.292286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.870 [2024-07-26 02:09:03.301774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.870 [2024-07-26 02:09:03.302167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.870 [2024-07-26 02:09:03.302199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.870 [2024-07-26 02:09:03.302217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.870 [2024-07-26 02:09:03.302455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.870 [2024-07-26 02:09:03.302697] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.870 [2024-07-26 02:09:03.302720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.870 [2024-07-26 02:09:03.302735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.871 [2024-07-26 02:09:03.306314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.871 [2024-07-26 02:09:03.315625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.871 [2024-07-26 02:09:03.316000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.871 [2024-07-26 02:09:03.316032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.871 [2024-07-26 02:09:03.316055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.871 [2024-07-26 02:09:03.316306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.871 [2024-07-26 02:09:03.316548] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.871 [2024-07-26 02:09:03.316571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.871 [2024-07-26 02:09:03.316587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.871 [2024-07-26 02:09:03.320162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.871 [2024-07-26 02:09:03.329640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.871 [2024-07-26 02:09:03.330064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.871 [2024-07-26 02:09:03.330095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.871 [2024-07-26 02:09:03.330112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.871 [2024-07-26 02:09:03.330351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.871 [2024-07-26 02:09:03.330593] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.871 [2024-07-26 02:09:03.330617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.871 [2024-07-26 02:09:03.330631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.871 [2024-07-26 02:09:03.334207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.871 [2024-07-26 02:09:03.343473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.871 [2024-07-26 02:09:03.343885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.871 [2024-07-26 02:09:03.343916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.871 [2024-07-26 02:09:03.343933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.871 [2024-07-26 02:09:03.344183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.871 [2024-07-26 02:09:03.344426] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.871 [2024-07-26 02:09:03.344449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.871 [2024-07-26 02:09:03.344465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.871 [2024-07-26 02:09:03.348008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.871 [2024-07-26 02:09:03.357496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.871 [2024-07-26 02:09:03.357893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.871 [2024-07-26 02:09:03.357924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.871 [2024-07-26 02:09:03.357942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.871 [2024-07-26 02:09:03.358191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.871 [2024-07-26 02:09:03.358434] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.871 [2024-07-26 02:09:03.358462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.871 [2024-07-26 02:09:03.358478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.871 [2024-07-26 02:09:03.362046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.871 [2024-07-26 02:09:03.371340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.871 [2024-07-26 02:09:03.371729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.871 [2024-07-26 02:09:03.371760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.871 [2024-07-26 02:09:03.371778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.871 [2024-07-26 02:09:03.372016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.871 [2024-07-26 02:09:03.372301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.871 [2024-07-26 02:09:03.372325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.871 [2024-07-26 02:09:03.372341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.871 [2024-07-26 02:09:03.375906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.871 [2024-07-26 02:09:03.385183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.871 [2024-07-26 02:09:03.385601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.871 [2024-07-26 02:09:03.385629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.871 [2024-07-26 02:09:03.385644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.871 [2024-07-26 02:09:03.385884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.871 [2024-07-26 02:09:03.386138] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.871 [2024-07-26 02:09:03.386162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.871 [2024-07-26 02:09:03.386178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.871 [2024-07-26 02:09:03.389745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.872 [2024-07-26 02:09:03.399007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.872 [2024-07-26 02:09:03.399391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.872 [2024-07-26 02:09:03.399417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.872 [2024-07-26 02:09:03.399432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.872 [2024-07-26 02:09:03.399654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.872 [2024-07-26 02:09:03.399911] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.872 [2024-07-26 02:09:03.399934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.872 [2024-07-26 02:09:03.399950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.872 [2024-07-26 02:09:03.403528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.872 [2024-07-26 02:09:03.413013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.872 [2024-07-26 02:09:03.413429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.872 [2024-07-26 02:09:03.413461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.872 [2024-07-26 02:09:03.413478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.872 [2024-07-26 02:09:03.413716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.872 [2024-07-26 02:09:03.413958] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.872 [2024-07-26 02:09:03.413981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.872 [2024-07-26 02:09:03.413996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.872 [2024-07-26 02:09:03.417577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.872 [2024-07-26 02:09:03.426845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.872 [2024-07-26 02:09:03.427270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.872 [2024-07-26 02:09:03.427298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.872 [2024-07-26 02:09:03.427313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.872 [2024-07-26 02:09:03.427564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.872 [2024-07-26 02:09:03.427806] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.872 [2024-07-26 02:09:03.427829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.872 [2024-07-26 02:09:03.427844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.872 [2024-07-26 02:09:03.431425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.872 [2024-07-26 02:09:03.440703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.872 [2024-07-26 02:09:03.441117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.872 [2024-07-26 02:09:03.441149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.872 [2024-07-26 02:09:03.441166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.872 [2024-07-26 02:09:03.441404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.872 [2024-07-26 02:09:03.441647] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.872 [2024-07-26 02:09:03.441670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.872 [2024-07-26 02:09:03.441685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.872 [2024-07-26 02:09:03.445263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.872 [2024-07-26 02:09:03.454741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.872 [2024-07-26 02:09:03.455170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.872 [2024-07-26 02:09:03.455198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.872 [2024-07-26 02:09:03.455214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.872 [2024-07-26 02:09:03.455468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.872 [2024-07-26 02:09:03.455712] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.872 [2024-07-26 02:09:03.455735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.872 [2024-07-26 02:09:03.455750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.872 [2024-07-26 02:09:03.459342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.872 [2024-07-26 02:09:03.468621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.872 [2024-07-26 02:09:03.469008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.872 [2024-07-26 02:09:03.469039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.872 [2024-07-26 02:09:03.469056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.872 [2024-07-26 02:09:03.469315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.872 [2024-07-26 02:09:03.469557] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.872 [2024-07-26 02:09:03.469580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.872 [2024-07-26 02:09:03.469595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.872 [2024-07-26 02:09:03.473173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.872 [2024-07-26 02:09:03.482650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.872 [2024-07-26 02:09:03.483073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.872 [2024-07-26 02:09:03.483104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.872 [2024-07-26 02:09:03.483121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.872 [2024-07-26 02:09:03.483359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.872 [2024-07-26 02:09:03.483601] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.872 [2024-07-26 02:09:03.483624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.872 [2024-07-26 02:09:03.483639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.872 [2024-07-26 02:09:03.487218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.872 [2024-07-26 02:09:03.496486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.872 [2024-07-26 02:09:03.496871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.872 [2024-07-26 02:09:03.496902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.872 [2024-07-26 02:09:03.496920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.872 [2024-07-26 02:09:03.497169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.872 [2024-07-26 02:09:03.497412] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.872 [2024-07-26 02:09:03.497435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.872 [2024-07-26 02:09:03.497457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.872 [2024-07-26 02:09:03.501029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.872 [2024-07-26 02:09:03.510512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.872 [2024-07-26 02:09:03.510897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.872 [2024-07-26 02:09:03.510927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.872 [2024-07-26 02:09:03.510945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.872 [2024-07-26 02:09:03.511194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.872 [2024-07-26 02:09:03.511436] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.872 [2024-07-26 02:09:03.511459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.872 [2024-07-26 02:09:03.511474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.872 [2024-07-26 02:09:03.515052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.872 [2024-07-26 02:09:03.524534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.872 [2024-07-26 02:09:03.524954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.872 [2024-07-26 02:09:03.524996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.872 [2024-07-26 02:09:03.525011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.872 [2024-07-26 02:09:03.525288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.872 [2024-07-26 02:09:03.525531] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.872 [2024-07-26 02:09:03.525554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.872 [2024-07-26 02:09:03.525569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.873 [2024-07-26 02:09:03.529145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.873 [2024-07-26 02:09:03.538414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.873 [2024-07-26 02:09:03.538823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.873 [2024-07-26 02:09:03.538853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.873 [2024-07-26 02:09:03.538871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.873 [2024-07-26 02:09:03.539120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.873 [2024-07-26 02:09:03.539363] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.873 [2024-07-26 02:09:03.539386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.873 [2024-07-26 02:09:03.539401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.873 [2024-07-26 02:09:03.542969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.873 [2024-07-26 02:09:03.552451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.873 [2024-07-26 02:09:03.552877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.873 [2024-07-26 02:09:03.552907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.873 [2024-07-26 02:09:03.552924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.873 [2024-07-26 02:09:03.553174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.873 [2024-07-26 02:09:03.553417] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.873 [2024-07-26 02:09:03.553440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.873 [2024-07-26 02:09:03.553455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.873 [2024-07-26 02:09:03.557014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.873 [2024-07-26 02:09:03.566527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.873 [2024-07-26 02:09:03.566963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.873 [2024-07-26 02:09:03.566990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.873 [2024-07-26 02:09:03.567020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.873 [2024-07-26 02:09:03.567271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.873 [2024-07-26 02:09:03.567514] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.873 [2024-07-26 02:09:03.567537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.873 [2024-07-26 02:09:03.567552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.873 [2024-07-26 02:09:03.571139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.873 [2024-07-26 02:09:03.580407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.873 [2024-07-26 02:09:03.580831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.873 [2024-07-26 02:09:03.580873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.873 [2024-07-26 02:09:03.580888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.873 [2024-07-26 02:09:03.581169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.873 [2024-07-26 02:09:03.581412] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.873 [2024-07-26 02:09:03.581435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.873 [2024-07-26 02:09:03.581450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.873 [2024-07-26 02:09:03.585022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.873 [2024-07-26 02:09:03.594298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.873 [2024-07-26 02:09:03.594684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.873 [2024-07-26 02:09:03.594715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.873 [2024-07-26 02:09:03.594732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.873 [2024-07-26 02:09:03.594970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.873 [2024-07-26 02:09:03.595230] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.873 [2024-07-26 02:09:03.595254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.873 [2024-07-26 02:09:03.595269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.873 [2024-07-26 02:09:03.598847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.873 [2024-07-26 02:09:03.608334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.873 [2024-07-26 02:09:03.608720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.873 [2024-07-26 02:09:03.608752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.873 [2024-07-26 02:09:03.608769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.873 [2024-07-26 02:09:03.609007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.873 [2024-07-26 02:09:03.609261] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.873 [2024-07-26 02:09:03.609285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.873 [2024-07-26 02:09:03.609300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.873 [2024-07-26 02:09:03.612868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.873 [2024-07-26 02:09:03.622357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.873 [2024-07-26 02:09:03.622757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.873 [2024-07-26 02:09:03.622785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.873 [2024-07-26 02:09:03.622800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.873 [2024-07-26 02:09:03.623038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.873 [2024-07-26 02:09:03.623292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.873 [2024-07-26 02:09:03.623315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.873 [2024-07-26 02:09:03.623330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.873 [2024-07-26 02:09:03.626897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.873 [2024-07-26 02:09:03.636379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.873 [2024-07-26 02:09:03.636778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.873 [2024-07-26 02:09:03.636809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.873 [2024-07-26 02:09:03.636826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.873 [2024-07-26 02:09:03.637074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.873 [2024-07-26 02:09:03.637324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.873 [2024-07-26 02:09:03.637348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.873 [2024-07-26 02:09:03.637363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.873 [2024-07-26 02:09:03.640940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.873 [2024-07-26 02:09:03.650425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.873 [2024-07-26 02:09:03.650820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.873 [2024-07-26 02:09:03.650850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.873 [2024-07-26 02:09:03.650868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.873 [2024-07-26 02:09:03.651117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.873 [2024-07-26 02:09:03.651359] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.873 [2024-07-26 02:09:03.651382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.873 [2024-07-26 02:09:03.651398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.873 [2024-07-26 02:09:03.654965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.873 [2024-07-26 02:09:03.664444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.873 [2024-07-26 02:09:03.664843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.873 [2024-07-26 02:09:03.664871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.873 [2024-07-26 02:09:03.664886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.873 [2024-07-26 02:09:03.665136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.873 [2024-07-26 02:09:03.665379] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.873 [2024-07-26 02:09:03.665402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.873 [2024-07-26 02:09:03.665417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.873 [2024-07-26 02:09:03.668991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.874 [2024-07-26 02:09:03.678482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.874 [2024-07-26 02:09:03.678926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.874 [2024-07-26 02:09:03.678953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.874 [2024-07-26 02:09:03.678969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.874 [2024-07-26 02:09:03.679216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.874 [2024-07-26 02:09:03.679459] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.874 [2024-07-26 02:09:03.679482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.874 [2024-07-26 02:09:03.679497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.874 [2024-07-26 02:09:03.683076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.874 [2024-07-26 02:09:03.692324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.874 [2024-07-26 02:09:03.692744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.874 [2024-07-26 02:09:03.692787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.874 [2024-07-26 02:09:03.692807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.874 [2024-07-26 02:09:03.693084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.874 [2024-07-26 02:09:03.693315] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.874 [2024-07-26 02:09:03.693350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.874 [2024-07-26 02:09:03.693363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.874 [2024-07-26 02:09:03.696872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.874 [2024-07-26 02:09:03.706197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.874 [2024-07-26 02:09:03.706664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.874 [2024-07-26 02:09:03.706713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.874 [2024-07-26 02:09:03.706731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.874 [2024-07-26 02:09:03.706969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.874 [2024-07-26 02:09:03.707226] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.874 [2024-07-26 02:09:03.707249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.874 [2024-07-26 02:09:03.707263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.874 [2024-07-26 02:09:03.710841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.874 [2024-07-26 02:09:03.720149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.874 [2024-07-26 02:09:03.720608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.874 [2024-07-26 02:09:03.720638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.874 [2024-07-26 02:09:03.720655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.874 [2024-07-26 02:09:03.720892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.874 [2024-07-26 02:09:03.721154] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.874 [2024-07-26 02:09:03.721176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.874 [2024-07-26 02:09:03.721189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.874 [2024-07-26 02:09:03.724732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.874 [2024-07-26 02:09:03.733979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.874 [2024-07-26 02:09:03.734392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.874 [2024-07-26 02:09:03.734441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.874 [2024-07-26 02:09:03.734458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.874 [2024-07-26 02:09:03.734696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.874 [2024-07-26 02:09:03.734943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.874 [2024-07-26 02:09:03.734967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.874 [2024-07-26 02:09:03.734982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.874 [2024-07-26 02:09:03.738563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.874 [2024-07-26 02:09:03.747858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.874 [2024-07-26 02:09:03.748273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.874 [2024-07-26 02:09:03.748305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.874 [2024-07-26 02:09:03.748322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.874 [2024-07-26 02:09:03.748560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.874 [2024-07-26 02:09:03.748803] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.874 [2024-07-26 02:09:03.748825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.874 [2024-07-26 02:09:03.748840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.874 [2024-07-26 02:09:03.752423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.874 [2024-07-26 02:09:03.761694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.874 [2024-07-26 02:09:03.762106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.874 [2024-07-26 02:09:03.762138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.874 [2024-07-26 02:09:03.762155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.874 [2024-07-26 02:09:03.762393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.874 [2024-07-26 02:09:03.762648] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.874 [2024-07-26 02:09:03.762671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.874 [2024-07-26 02:09:03.762686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.874 [2024-07-26 02:09:03.766260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.874 [2024-07-26 02:09:03.775364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.874 [2024-07-26 02:09:03.775897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.874 [2024-07-26 02:09:03.775949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.874 [2024-07-26 02:09:03.775967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.874 [2024-07-26 02:09:03.776230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.874 [2024-07-26 02:09:03.776441] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.874 [2024-07-26 02:09:03.776462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.874 [2024-07-26 02:09:03.776475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.874 [2024-07-26 02:09:03.780029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.874 [2024-07-26 02:09:03.789378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.874 [2024-07-26 02:09:03.789791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.874 [2024-07-26 02:09:03.789822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.874 [2024-07-26 02:09:03.789848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.874 [2024-07-26 02:09:03.790098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.874 [2024-07-26 02:09:03.790335] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.874 [2024-07-26 02:09:03.790383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.874 [2024-07-26 02:09:03.790398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.874 [2024-07-26 02:09:03.794006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.874 [2024-07-26 02:09:03.803339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.874 [2024-07-26 02:09:03.803792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.874 [2024-07-26 02:09:03.803823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.874 [2024-07-26 02:09:03.803840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.874 [2024-07-26 02:09:03.804088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.874 [2024-07-26 02:09:03.804323] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.874 [2024-07-26 02:09:03.804359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.874 [2024-07-26 02:09:03.804375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.874 [2024-07-26 02:09:03.807935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.875 [2024-07-26 02:09:03.817362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.875 [2024-07-26 02:09:03.817799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.875 [2024-07-26 02:09:03.817826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.875 [2024-07-26 02:09:03.817856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.875 [2024-07-26 02:09:03.818078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.875 [2024-07-26 02:09:03.818296] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.875 [2024-07-26 02:09:03.818317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.875 [2024-07-26 02:09:03.818331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.875 [2024-07-26 02:09:03.821971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.875 [2024-07-26 02:09:03.831341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.875 [2024-07-26 02:09:03.831829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.875 [2024-07-26 02:09:03.831883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.875 [2024-07-26 02:09:03.831906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.875 [2024-07-26 02:09:03.832169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.875 [2024-07-26 02:09:03.832408] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.875 [2024-07-26 02:09:03.832432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.875 [2024-07-26 02:09:03.832448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.875 [2024-07-26 02:09:03.836052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.875 [2024-07-26 02:09:03.845346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.875 [2024-07-26 02:09:03.845796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.875 [2024-07-26 02:09:03.845845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.875 [2024-07-26 02:09:03.845862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.875 [2024-07-26 02:09:03.846110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.875 [2024-07-26 02:09:03.846352] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.875 [2024-07-26 02:09:03.846375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.875 [2024-07-26 02:09:03.846391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.875 [2024-07-26 02:09:03.849962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.875 [2024-07-26 02:09:03.859242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.875 [2024-07-26 02:09:03.859711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.875 [2024-07-26 02:09:03.859737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:21.875 [2024-07-26 02:09:03.859767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:21.875 [2024-07-26 02:09:03.860013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:21.875 [2024-07-26 02:09:03.860265] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.875 [2024-07-26 02:09:03.860288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.875 [2024-07-26 02:09:03.860303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.875 [2024-07-26 02:09:03.863882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.875 [2024-07-26 02:09:03.873175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.875 [2024-07-26 02:09:03.873587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.875 [2024-07-26 02:09:03.873618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:21.875 [2024-07-26 02:09:03.873635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:21.875 [2024-07-26 02:09:03.873874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:21.875 [2024-07-26 02:09:03.874125] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.875 [2024-07-26 02:09:03.874154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.875 [2024-07-26 02:09:03.874170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.134 [2024-07-26 02:09:03.877740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.134 [2024-07-26 02:09:03.887018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.134 [2024-07-26 02:09:03.887465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.134 [2024-07-26 02:09:03.887514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.134 [2024-07-26 02:09:03.887532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.134 [2024-07-26 02:09:03.887769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.134 [2024-07-26 02:09:03.888012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.134 [2024-07-26 02:09:03.888035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.134 [2024-07-26 02:09:03.888050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.134 [2024-07-26 02:09:03.891628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.134 [2024-07-26 02:09:03.900904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.134 [2024-07-26 02:09:03.901305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.134 [2024-07-26 02:09:03.901336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.134 [2024-07-26 02:09:03.901354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.134 [2024-07-26 02:09:03.901591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.134 [2024-07-26 02:09:03.901832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.134 [2024-07-26 02:09:03.901855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.135 [2024-07-26 02:09:03.901870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.135 [2024-07-26 02:09:03.905451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.135 [2024-07-26 02:09:03.914942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.135 [2024-07-26 02:09:03.915360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.135 [2024-07-26 02:09:03.915391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.135 [2024-07-26 02:09:03.915408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.135 [2024-07-26 02:09:03.915646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.135 [2024-07-26 02:09:03.915887] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.135 [2024-07-26 02:09:03.915911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.135 [2024-07-26 02:09:03.915926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.135 [2024-07-26 02:09:03.919509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.135 [2024-07-26 02:09:03.928782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.135 [2024-07-26 02:09:03.929182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.135 [2024-07-26 02:09:03.929213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.135 [2024-07-26 02:09:03.929230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.135 [2024-07-26 02:09:03.929468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.135 [2024-07-26 02:09:03.929710] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.135 [2024-07-26 02:09:03.929733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.135 [2024-07-26 02:09:03.929749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.135 [2024-07-26 02:09:03.933331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.135 [2024-07-26 02:09:03.942829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.135 [2024-07-26 02:09:03.943264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.135 [2024-07-26 02:09:03.943292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.135 [2024-07-26 02:09:03.943307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.135 [2024-07-26 02:09:03.943557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.135 [2024-07-26 02:09:03.943799] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.135 [2024-07-26 02:09:03.943823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.135 [2024-07-26 02:09:03.943838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.135 [2024-07-26 02:09:03.947483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.135 [2024-07-26 02:09:03.956779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.135 [2024-07-26 02:09:03.957180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.135 [2024-07-26 02:09:03.957211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.135 [2024-07-26 02:09:03.957228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.135 [2024-07-26 02:09:03.957467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.135 [2024-07-26 02:09:03.957709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.135 [2024-07-26 02:09:03.957732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.135 [2024-07-26 02:09:03.957747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.135 [2024-07-26 02:09:03.961333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.135 [2024-07-26 02:09:03.970624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.135 [2024-07-26 02:09:03.971013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.135 [2024-07-26 02:09:03.971044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.135 [2024-07-26 02:09:03.971072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.135 [2024-07-26 02:09:03.971344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.135 [2024-07-26 02:09:03.971599] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.135 [2024-07-26 02:09:03.971622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.135 [2024-07-26 02:09:03.971637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.135 [2024-07-26 02:09:03.975219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.135 [2024-07-26 02:09:03.984491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.135 [2024-07-26 02:09:03.984929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.135 [2024-07-26 02:09:03.984971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.135 [2024-07-26 02:09:03.984987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.135 [2024-07-26 02:09:03.985239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.135 [2024-07-26 02:09:03.985490] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.135 [2024-07-26 02:09:03.985514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.135 [2024-07-26 02:09:03.985529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.135 [2024-07-26 02:09:03.989112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.135 [2024-07-26 02:09:03.998388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.135 [2024-07-26 02:09:03.998797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.135 [2024-07-26 02:09:03.998828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.135 [2024-07-26 02:09:03.998845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.135 [2024-07-26 02:09:03.999095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.135 [2024-07-26 02:09:03.999338] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.135 [2024-07-26 02:09:03.999362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.135 [2024-07-26 02:09:03.999377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.135 [2024-07-26 02:09:04.002950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.135 [2024-07-26 02:09:04.012241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.135 [2024-07-26 02:09:04.012636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.135 [2024-07-26 02:09:04.012678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.135 [2024-07-26 02:09:04.012693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.135 [2024-07-26 02:09:04.012951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.135 [2024-07-26 02:09:04.013205] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.135 [2024-07-26 02:09:04.013229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.135 [2024-07-26 02:09:04.013250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.135 [2024-07-26 02:09:04.016826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.135 [2024-07-26 02:09:04.026104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.135 [2024-07-26 02:09:04.026512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.135 [2024-07-26 02:09:04.026543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.135 [2024-07-26 02:09:04.026561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.135 [2024-07-26 02:09:04.026799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.135 [2024-07-26 02:09:04.027041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.135 [2024-07-26 02:09:04.027075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.135 [2024-07-26 02:09:04.027092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.135 [2024-07-26 02:09:04.030662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.135 [2024-07-26 02:09:04.039937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.135 [2024-07-26 02:09:04.040371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.135 [2024-07-26 02:09:04.040413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.135 [2024-07-26 02:09:04.040429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.135 [2024-07-26 02:09:04.040696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.135 [2024-07-26 02:09:04.040938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.135 [2024-07-26 02:09:04.040961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.136 [2024-07-26 02:09:04.040976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.136 [2024-07-26 02:09:04.044558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.136 [2024-07-26 02:09:04.053840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.136 [2024-07-26 02:09:04.054256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.136 [2024-07-26 02:09:04.054299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.136 [2024-07-26 02:09:04.054315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.136 [2024-07-26 02:09:04.054549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.136 [2024-07-26 02:09:04.054791] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.136 [2024-07-26 02:09:04.054814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.136 [2024-07-26 02:09:04.054829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.136 [2024-07-26 02:09:04.058413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.136 [2024-07-26 02:09:04.067707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.136 [2024-07-26 02:09:04.068095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.136 [2024-07-26 02:09:04.068131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.136 [2024-07-26 02:09:04.068150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.136 [2024-07-26 02:09:04.068388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.136 [2024-07-26 02:09:04.068630] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.136 [2024-07-26 02:09:04.068653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.136 [2024-07-26 02:09:04.068668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.136 [2024-07-26 02:09:04.072259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.136 [2024-07-26 02:09:04.081740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.136 [2024-07-26 02:09:04.082151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.136 [2024-07-26 02:09:04.082183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.136 [2024-07-26 02:09:04.082200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.136 [2024-07-26 02:09:04.082438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.136 [2024-07-26 02:09:04.082681] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.136 [2024-07-26 02:09:04.082704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.136 [2024-07-26 02:09:04.082720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.136 [2024-07-26 02:09:04.086293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.136 [2024-07-26 02:09:04.095765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.136 [2024-07-26 02:09:04.096173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.136 [2024-07-26 02:09:04.096216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.136 [2024-07-26 02:09:04.096232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.136 [2024-07-26 02:09:04.096500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.136 [2024-07-26 02:09:04.096743] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.136 [2024-07-26 02:09:04.096767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.136 [2024-07-26 02:09:04.096782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.136 [2024-07-26 02:09:04.100358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.136 [2024-07-26 02:09:04.109633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.136 [2024-07-26 02:09:04.110016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.136 [2024-07-26 02:09:04.110047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.136 [2024-07-26 02:09:04.110072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.136 [2024-07-26 02:09:04.110312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.136 [2024-07-26 02:09:04.110559] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.136 [2024-07-26 02:09:04.110583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.136 [2024-07-26 02:09:04.110598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.136 [2024-07-26 02:09:04.114174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.136 [2024-07-26 02:09:04.123668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.136 [2024-07-26 02:09:04.124077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.136 [2024-07-26 02:09:04.124109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.136 [2024-07-26 02:09:04.124126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.136 [2024-07-26 02:09:04.124364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.136 [2024-07-26 02:09:04.124606] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.136 [2024-07-26 02:09:04.124630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.136 [2024-07-26 02:09:04.124645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.136 [2024-07-26 02:09:04.128231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.136 [2024-07-26 02:09:04.137512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.136 [2024-07-26 02:09:04.137915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.136 [2024-07-26 02:09:04.137963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.136 [2024-07-26 02:09:04.137980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.136 [2024-07-26 02:09:04.138229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.136 [2024-07-26 02:09:04.138471] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.136 [2024-07-26 02:09:04.138494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.136 [2024-07-26 02:09:04.138510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.136 [2024-07-26 02:09:04.142096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.397 [2024-07-26 02:09:04.151386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.397 [2024-07-26 02:09:04.151798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.397 [2024-07-26 02:09:04.151828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.397 [2024-07-26 02:09:04.151846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.397 [2024-07-26 02:09:04.152097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.397 [2024-07-26 02:09:04.152340] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.397 [2024-07-26 02:09:04.152363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.397 [2024-07-26 02:09:04.152378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.397 [2024-07-26 02:09:04.155959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.397 [2024-07-26 02:09:04.165269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.397 [2024-07-26 02:09:04.165669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.397 [2024-07-26 02:09:04.165700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.397 [2024-07-26 02:09:04.165718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.397 [2024-07-26 02:09:04.165956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.397 [2024-07-26 02:09:04.166209] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.397 [2024-07-26 02:09:04.166233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.397 [2024-07-26 02:09:04.166248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.397 [2024-07-26 02:09:04.169832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.397 [2024-07-26 02:09:04.179137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.397 [2024-07-26 02:09:04.179534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.397 [2024-07-26 02:09:04.179566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.397 [2024-07-26 02:09:04.179584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.397 [2024-07-26 02:09:04.179821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.397 [2024-07-26 02:09:04.180073] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.397 [2024-07-26 02:09:04.180097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.397 [2024-07-26 02:09:04.180113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.397 [2024-07-26 02:09:04.183679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.397 [2024-07-26 02:09:04.193269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.397 [2024-07-26 02:09:04.193683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.397 [2024-07-26 02:09:04.193714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.397 [2024-07-26 02:09:04.193731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.397 [2024-07-26 02:09:04.193970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.397 [2024-07-26 02:09:04.194222] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.397 [2024-07-26 02:09:04.194246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.397 [2024-07-26 02:09:04.194262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.397 [2024-07-26 02:09:04.197828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.397 [2024-07-26 02:09:04.207305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.397 [2024-07-26 02:09:04.207703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.397 [2024-07-26 02:09:04.207744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.397 [2024-07-26 02:09:04.207765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.397 [2024-07-26 02:09:04.207996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.397 [2024-07-26 02:09:04.208253] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.397 [2024-07-26 02:09:04.208278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.397 [2024-07-26 02:09:04.208294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.397 [2024-07-26 02:09:04.211862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.397 [2024-07-26 02:09:04.221149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.397 [2024-07-26 02:09:04.221563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.397 [2024-07-26 02:09:04.221594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.398 [2024-07-26 02:09:04.221612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.398 [2024-07-26 02:09:04.221849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.398 [2024-07-26 02:09:04.222103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.398 [2024-07-26 02:09:04.222127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.398 [2024-07-26 02:09:04.222142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.398 [2024-07-26 02:09:04.225718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.398 [2024-07-26 02:09:04.234998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.398 [2024-07-26 02:09:04.235424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.398 [2024-07-26 02:09:04.235455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.398 [2024-07-26 02:09:04.235473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.398 [2024-07-26 02:09:04.235710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.398 [2024-07-26 02:09:04.235952] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.398 [2024-07-26 02:09:04.235975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.398 [2024-07-26 02:09:04.235990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.398 [2024-07-26 02:09:04.239574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.398 [2024-07-26 02:09:04.248851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.398 [2024-07-26 02:09:04.249267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.398 [2024-07-26 02:09:04.249298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.398 [2024-07-26 02:09:04.249316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.398 [2024-07-26 02:09:04.249554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.398 [2024-07-26 02:09:04.249796] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.398 [2024-07-26 02:09:04.249824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.398 [2024-07-26 02:09:04.249840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.398 [2024-07-26 02:09:04.253423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.398 [2024-07-26 02:09:04.262733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.398 [2024-07-26 02:09:04.263162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.398 [2024-07-26 02:09:04.263190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.398 [2024-07-26 02:09:04.263206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.398 [2024-07-26 02:09:04.263448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.398 [2024-07-26 02:09:04.263691] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.398 [2024-07-26 02:09:04.263714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.398 [2024-07-26 02:09:04.263730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.398 [2024-07-26 02:09:04.267307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.398 [2024-07-26 02:09:04.276663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.398 [2024-07-26 02:09:04.277073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.398 [2024-07-26 02:09:04.277105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.398 [2024-07-26 02:09:04.277122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.398 [2024-07-26 02:09:04.277360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.398 [2024-07-26 02:09:04.277603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.398 [2024-07-26 02:09:04.277626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.398 [2024-07-26 02:09:04.277641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.398 [2024-07-26 02:09:04.281220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.398 [2024-07-26 02:09:04.290697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.398 [2024-07-26 02:09:04.291122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.398 [2024-07-26 02:09:04.291165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.398 [2024-07-26 02:09:04.291181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.398 [2024-07-26 02:09:04.291447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.398 [2024-07-26 02:09:04.291689] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.398 [2024-07-26 02:09:04.291712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.398 [2024-07-26 02:09:04.291727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.398 [2024-07-26 02:09:04.295305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.398 [2024-07-26 02:09:04.304573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.398 [2024-07-26 02:09:04.304984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.398 [2024-07-26 02:09:04.305015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.398 [2024-07-26 02:09:04.305033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.398 [2024-07-26 02:09:04.305282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.398 [2024-07-26 02:09:04.305524] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.398 [2024-07-26 02:09:04.305547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.398 [2024-07-26 02:09:04.305563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.398 [2024-07-26 02:09:04.309141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.398 [2024-07-26 02:09:04.318404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.398 [2024-07-26 02:09:04.318818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.398 [2024-07-26 02:09:04.318848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.398 [2024-07-26 02:09:04.318865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.398 [2024-07-26 02:09:04.319115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.398 [2024-07-26 02:09:04.319358] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.398 [2024-07-26 02:09:04.319381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.398 [2024-07-26 02:09:04.319396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.398 [2024-07-26 02:09:04.322965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.398 [2024-07-26 02:09:04.332258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.398 [2024-07-26 02:09:04.332670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.398 [2024-07-26 02:09:04.332700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.398 [2024-07-26 02:09:04.332718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.398 [2024-07-26 02:09:04.332956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.398 [2024-07-26 02:09:04.333208] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.398 [2024-07-26 02:09:04.333232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.398 [2024-07-26 02:09:04.333247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.398 [2024-07-26 02:09:04.336818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.398 [2024-07-26 02:09:04.346094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.398 [2024-07-26 02:09:04.346504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.398 [2024-07-26 02:09:04.346534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.398 [2024-07-26 02:09:04.346560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.398 [2024-07-26 02:09:04.346799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.398 [2024-07-26 02:09:04.347041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.398 [2024-07-26 02:09:04.347072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.398 [2024-07-26 02:09:04.347089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.398 [2024-07-26 02:09:04.350663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.398 [2024-07-26 02:09:04.359948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.398 [2024-07-26 02:09:04.360335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.398 [2024-07-26 02:09:04.360366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.398 [2024-07-26 02:09:04.360383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.399 [2024-07-26 02:09:04.360621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.399 [2024-07-26 02:09:04.360862] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.399 [2024-07-26 02:09:04.360885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.399 [2024-07-26 02:09:04.360900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.399 [2024-07-26 02:09:04.364486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.399 [2024-07-26 02:09:04.373795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.399 [2024-07-26 02:09:04.374213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.399 [2024-07-26 02:09:04.374244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.399 [2024-07-26 02:09:04.374261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.399 [2024-07-26 02:09:04.374499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.399 [2024-07-26 02:09:04.374741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.399 [2024-07-26 02:09:04.374765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.399 [2024-07-26 02:09:04.374780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.399 [2024-07-26 02:09:04.378363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.399 [2024-07-26 02:09:04.387640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.399 [2024-07-26 02:09:04.388024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.399 [2024-07-26 02:09:04.388054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.399 [2024-07-26 02:09:04.388084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.399 [2024-07-26 02:09:04.388323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.399 [2024-07-26 02:09:04.388564] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.399 [2024-07-26 02:09:04.388592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.399 [2024-07-26 02:09:04.388609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.399 [2024-07-26 02:09:04.392194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.399 [2024-07-26 02:09:04.401679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.399 [2024-07-26 02:09:04.402075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.399 [2024-07-26 02:09:04.402106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.399 [2024-07-26 02:09:04.402123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.399 [2024-07-26 02:09:04.402361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.399 [2024-07-26 02:09:04.402603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.399 [2024-07-26 02:09:04.402626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.399 [2024-07-26 02:09:04.402641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.399 [2024-07-26 02:09:04.406226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.660 [2024-07-26 02:09:04.415728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.660 [2024-07-26 02:09:04.416142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.660 [2024-07-26 02:09:04.416173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.660 [2024-07-26 02:09:04.416191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.660 [2024-07-26 02:09:04.416429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.660 [2024-07-26 02:09:04.416671] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.660 [2024-07-26 02:09:04.416694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.660 [2024-07-26 02:09:04.416709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.660 [2024-07-26 02:09:04.420299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.660 [2024-07-26 02:09:04.429576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.660 [2024-07-26 02:09:04.429941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.660 [2024-07-26 02:09:04.429972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.660 [2024-07-26 02:09:04.429989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.660 [2024-07-26 02:09:04.430239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.660 [2024-07-26 02:09:04.430482] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.660 [2024-07-26 02:09:04.430505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.660 [2024-07-26 02:09:04.430520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.660 [2024-07-26 02:09:04.434100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.660 [2024-07-26 02:09:04.443594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.660 [2024-07-26 02:09:04.443989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.660 [2024-07-26 02:09:04.444020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.660 [2024-07-26 02:09:04.444037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.660 [2024-07-26 02:09:04.444286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.660 [2024-07-26 02:09:04.444529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.660 [2024-07-26 02:09:04.444552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.660 [2024-07-26 02:09:04.444568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.660 [2024-07-26 02:09:04.448147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.660 [2024-07-26 02:09:04.457630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.660 [2024-07-26 02:09:04.458056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.660 [2024-07-26 02:09:04.458094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.660 [2024-07-26 02:09:04.458111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.660 [2024-07-26 02:09:04.458349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.660 [2024-07-26 02:09:04.458591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.660 [2024-07-26 02:09:04.458614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.660 [2024-07-26 02:09:04.458629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.660 [2024-07-26 02:09:04.462211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.660 [2024-07-26 02:09:04.471502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.660 [2024-07-26 02:09:04.471891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.660 [2024-07-26 02:09:04.471922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.660 [2024-07-26 02:09:04.471939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.660 [2024-07-26 02:09:04.472190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.660 [2024-07-26 02:09:04.472433] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.660 [2024-07-26 02:09:04.472456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.660 [2024-07-26 02:09:04.472471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.660 [2024-07-26 02:09:04.476043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.660 [2024-07-26 02:09:04.485539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.660 [2024-07-26 02:09:04.485949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.660 [2024-07-26 02:09:04.485980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.660 [2024-07-26 02:09:04.485997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.660 [2024-07-26 02:09:04.486254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.660 [2024-07-26 02:09:04.486497] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.660 [2024-07-26 02:09:04.486520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.660 [2024-07-26 02:09:04.486535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.660 [2024-07-26 02:09:04.490119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.660 [2024-07-26 02:09:04.499395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.660 [2024-07-26 02:09:04.499783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.660 [2024-07-26 02:09:04.499813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.660 [2024-07-26 02:09:04.499830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.660 [2024-07-26 02:09:04.500081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.660 [2024-07-26 02:09:04.500324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.660 [2024-07-26 02:09:04.500348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.661 [2024-07-26 02:09:04.500363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.661 [2024-07-26 02:09:04.503937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.661 [2024-07-26 02:09:04.513435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.661 [2024-07-26 02:09:04.513845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.661 [2024-07-26 02:09:04.513875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.661 [2024-07-26 02:09:04.513893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.661 [2024-07-26 02:09:04.514143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.661 [2024-07-26 02:09:04.514385] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.661 [2024-07-26 02:09:04.514408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.661 [2024-07-26 02:09:04.514423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.661 [2024-07-26 02:09:04.517997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.661 [2024-07-26 02:09:04.527283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.661 [2024-07-26 02:09:04.527692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.661 [2024-07-26 02:09:04.527723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.661 [2024-07-26 02:09:04.527740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.661 [2024-07-26 02:09:04.527978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.661 [2024-07-26 02:09:04.528231] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.661 [2024-07-26 02:09:04.528255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.661 [2024-07-26 02:09:04.528275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.661 [2024-07-26 02:09:04.531847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.661 [2024-07-26 02:09:04.541133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.661 [2024-07-26 02:09:04.541554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.661 [2024-07-26 02:09:04.541584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.661 [2024-07-26 02:09:04.541601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.661 [2024-07-26 02:09:04.541839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.661 [2024-07-26 02:09:04.542095] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.661 [2024-07-26 02:09:04.542118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.661 [2024-07-26 02:09:04.542133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.661 [2024-07-26 02:09:04.545715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.661 [2024-07-26 02:09:04.554991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.661 [2024-07-26 02:09:04.555404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.661 [2024-07-26 02:09:04.555435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.661 [2024-07-26 02:09:04.555453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.661 [2024-07-26 02:09:04.555690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.661 [2024-07-26 02:09:04.555931] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.661 [2024-07-26 02:09:04.555955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.661 [2024-07-26 02:09:04.555970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.661 [2024-07-26 02:09:04.559554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.661 [2024-07-26 02:09:04.568832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.661 [2024-07-26 02:09:04.569248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.661 [2024-07-26 02:09:04.569279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.661 [2024-07-26 02:09:04.569296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.661 [2024-07-26 02:09:04.569534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.661 [2024-07-26 02:09:04.569776] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.661 [2024-07-26 02:09:04.569799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.661 [2024-07-26 02:09:04.569814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.661 [2024-07-26 02:09:04.573398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.661 [2024-07-26 02:09:04.582670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.661 [2024-07-26 02:09:04.583140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.661 [2024-07-26 02:09:04.583177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.661 [2024-07-26 02:09:04.583195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.661 [2024-07-26 02:09:04.583433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.661 [2024-07-26 02:09:04.583675] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.661 [2024-07-26 02:09:04.583698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.661 [2024-07-26 02:09:04.583713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.661 [2024-07-26 02:09:04.587311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.661 [2024-07-26 02:09:04.596594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.661 [2024-07-26 02:09:04.597009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.661 [2024-07-26 02:09:04.597040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.661 [2024-07-26 02:09:04.597057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.661 [2024-07-26 02:09:04.597310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.661 [2024-07-26 02:09:04.597553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.661 [2024-07-26 02:09:04.597575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.661 [2024-07-26 02:09:04.597590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.661 [2024-07-26 02:09:04.601174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.661 [2024-07-26 02:09:04.610460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.661 [2024-07-26 02:09:04.610845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.661 [2024-07-26 02:09:04.610876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.661 [2024-07-26 02:09:04.610893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.661 [2024-07-26 02:09:04.611145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.661 [2024-07-26 02:09:04.611388] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.661 [2024-07-26 02:09:04.611412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.661 [2024-07-26 02:09:04.611427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.661 [2024-07-26 02:09:04.615002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.661 [2024-07-26 02:09:04.624500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.661 [2024-07-26 02:09:04.624921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.661 [2024-07-26 02:09:04.624951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.661 [2024-07-26 02:09:04.624969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.661 [2024-07-26 02:09:04.625219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.661 [2024-07-26 02:09:04.625468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.661 [2024-07-26 02:09:04.625491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.661 [2024-07-26 02:09:04.625507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.661 [2024-07-26 02:09:04.629091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.661 [2024-07-26 02:09:04.638368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.661 [2024-07-26 02:09:04.638779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.661 [2024-07-26 02:09:04.638810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.661 [2024-07-26 02:09:04.638827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.661 [2024-07-26 02:09:04.639077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.661 [2024-07-26 02:09:04.639320] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.661 [2024-07-26 02:09:04.639343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.661 [2024-07-26 02:09:04.639358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.662 [2024-07-26 02:09:04.642933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.662 [2024-07-26 02:09:04.652226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.662 [2024-07-26 02:09:04.652654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.662 [2024-07-26 02:09:04.652684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.662 [2024-07-26 02:09:04.652701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.662 [2024-07-26 02:09:04.652939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.662 [2024-07-26 02:09:04.653194] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.662 [2024-07-26 02:09:04.653218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.662 [2024-07-26 02:09:04.653233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.662 [2024-07-26 02:09:04.656806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.662 [2024-07-26 02:09:04.666090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.662 [2024-07-26 02:09:04.666501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.662 [2024-07-26 02:09:04.666531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.662 [2024-07-26 02:09:04.666548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.662 [2024-07-26 02:09:04.666786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.662 [2024-07-26 02:09:04.667028] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.662 [2024-07-26 02:09:04.667052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.662 [2024-07-26 02:09:04.667081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.921 [2024-07-26 02:09:04.670680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.922 [2024-07-26 02:09:04.679967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.922 [2024-07-26 02:09:04.680409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.922 [2024-07-26 02:09:04.680439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.922 [2024-07-26 02:09:04.680457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.922 [2024-07-26 02:09:04.680695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.922 [2024-07-26 02:09:04.680936] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.922 [2024-07-26 02:09:04.680959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.922 [2024-07-26 02:09:04.680975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.922 [2024-07-26 02:09:04.684557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.922 [2024-07-26 02:09:04.693832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.922 [2024-07-26 02:09:04.694253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.922 [2024-07-26 02:09:04.694283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.922 [2024-07-26 02:09:04.694301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.922 [2024-07-26 02:09:04.694538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.922 [2024-07-26 02:09:04.694780] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.922 [2024-07-26 02:09:04.694803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.922 [2024-07-26 02:09:04.694818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.922 [2024-07-26 02:09:04.698404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.922 [2024-07-26 02:09:04.707684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.922 [2024-07-26 02:09:04.708050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.922 [2024-07-26 02:09:04.708099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.922 [2024-07-26 02:09:04.708117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.922 [2024-07-26 02:09:04.708355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.922 [2024-07-26 02:09:04.708597] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.922 [2024-07-26 02:09:04.708620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.922 [2024-07-26 02:09:04.708635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.922 [2024-07-26 02:09:04.712221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.922 [2024-07-26 02:09:04.721723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.922 [2024-07-26 02:09:04.722137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.922 [2024-07-26 02:09:04.722167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.922 [2024-07-26 02:09:04.722191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.922 [2024-07-26 02:09:04.722430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.922 [2024-07-26 02:09:04.722672] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.922 [2024-07-26 02:09:04.722695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.922 [2024-07-26 02:09:04.722710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.922 [2024-07-26 02:09:04.726289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.922 [2024-07-26 02:09:04.735563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.922 [2024-07-26 02:09:04.735973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.922 [2024-07-26 02:09:04.736004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.922 [2024-07-26 02:09:04.736021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.922 [2024-07-26 02:09:04.736270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.922 [2024-07-26 02:09:04.736512] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.922 [2024-07-26 02:09:04.736536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.922 [2024-07-26 02:09:04.736551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.922 [2024-07-26 02:09:04.740126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.922 [2024-07-26 02:09:04.749599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.922 [2024-07-26 02:09:04.750012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.922 [2024-07-26 02:09:04.750042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.922 [2024-07-26 02:09:04.750069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.922 [2024-07-26 02:09:04.750309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.922 [2024-07-26 02:09:04.750552] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.922 [2024-07-26 02:09:04.750575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.922 [2024-07-26 02:09:04.750590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.922 [2024-07-26 02:09:04.754165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.922 [2024-07-26 02:09:04.763440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.922 [2024-07-26 02:09:04.763853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.922 [2024-07-26 02:09:04.763884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.922 [2024-07-26 02:09:04.763901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.922 [2024-07-26 02:09:04.764151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.922 [2024-07-26 02:09:04.764394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.922 [2024-07-26 02:09:04.764423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.922 [2024-07-26 02:09:04.764439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.922 [2024-07-26 02:09:04.768015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.922 [2024-07-26 02:09:04.777314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.922 [2024-07-26 02:09:04.777766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.922 [2024-07-26 02:09:04.777813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.922 [2024-07-26 02:09:04.777831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.922 [2024-07-26 02:09:04.778085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.922 [2024-07-26 02:09:04.778329] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.922 [2024-07-26 02:09:04.778352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.922 [2024-07-26 02:09:04.778367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.922 [2024-07-26 02:09:04.781968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.922 [2024-07-26 02:09:04.791265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.922 [2024-07-26 02:09:04.791689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.922 [2024-07-26 02:09:04.791720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.922 [2024-07-26 02:09:04.791737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.922 [2024-07-26 02:09:04.791975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.922 [2024-07-26 02:09:04.792229] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.922 [2024-07-26 02:09:04.792254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.922 [2024-07-26 02:09:04.792269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.922 [2024-07-26 02:09:04.795841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.922 [2024-07-26 02:09:04.805121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.922 [2024-07-26 02:09:04.805533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.922 [2024-07-26 02:09:04.805564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.922 [2024-07-26 02:09:04.805581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.922 [2024-07-26 02:09:04.805819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.922 [2024-07-26 02:09:04.806073] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.922 [2024-07-26 02:09:04.806097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.922 [2024-07-26 02:09:04.806112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.922 [2024-07-26 02:09:04.809693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.923 [2024-07-26 02:09:04.818982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.923 [2024-07-26 02:09:04.819411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.923 [2024-07-26 02:09:04.819442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.923 [2024-07-26 02:09:04.819459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.923 [2024-07-26 02:09:04.819697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.923 [2024-07-26 02:09:04.819939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.923 [2024-07-26 02:09:04.819962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.923 [2024-07-26 02:09:04.819977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.923 [2024-07-26 02:09:04.823559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.923 [2024-07-26 02:09:04.832848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.923 [2024-07-26 02:09:04.833248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.923 [2024-07-26 02:09:04.833279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.923 [2024-07-26 02:09:04.833297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.923 [2024-07-26 02:09:04.833534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.923 [2024-07-26 02:09:04.833777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.923 [2024-07-26 02:09:04.833800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.923 [2024-07-26 02:09:04.833816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.923 [2024-07-26 02:09:04.837405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.923 [2024-07-26 02:09:04.846899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.923 [2024-07-26 02:09:04.847314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.923 [2024-07-26 02:09:04.847345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.923 [2024-07-26 02:09:04.847362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.923 [2024-07-26 02:09:04.847599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.923 [2024-07-26 02:09:04.847842] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.923 [2024-07-26 02:09:04.847865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.923 [2024-07-26 02:09:04.847880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.923 [2024-07-26 02:09:04.851459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.923 [2024-07-26 02:09:04.860734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.923 [2024-07-26 02:09:04.861144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.923 [2024-07-26 02:09:04.861175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:22.923 [2024-07-26 02:09:04.861192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:22.923 [2024-07-26 02:09:04.861436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:22.923 [2024-07-26 02:09:04.861678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.923 [2024-07-26 02:09:04.861702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.923 [2024-07-26 02:09:04.861717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.923 [2024-07-26 02:09:04.865300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.923 [2024-07-26 02:09:04.874585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.923 [2024-07-26 02:09:04.874951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.923 [2024-07-26 02:09:04.874982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.923 [2024-07-26 02:09:04.874999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.923 [2024-07-26 02:09:04.875247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.923 [2024-07-26 02:09:04.875489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.923 [2024-07-26 02:09:04.875512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.923 [2024-07-26 02:09:04.875527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.923 [2024-07-26 02:09:04.879106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.923 [2024-07-26 02:09:04.888585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.923 [2024-07-26 02:09:04.888971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.923 [2024-07-26 02:09:04.889002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.923 [2024-07-26 02:09:04.889019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.923 [2024-07-26 02:09:04.889265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.923 [2024-07-26 02:09:04.889508] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.923 [2024-07-26 02:09:04.889531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.923 [2024-07-26 02:09:04.889547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.923 [2024-07-26 02:09:04.893126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.923 [2024-07-26 02:09:04.902423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.923 [2024-07-26 02:09:04.902841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.923 [2024-07-26 02:09:04.902872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.923 [2024-07-26 02:09:04.902889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.923 [2024-07-26 02:09:04.903136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.923 [2024-07-26 02:09:04.903379] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.923 [2024-07-26 02:09:04.903402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.923 [2024-07-26 02:09:04.903422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.923 [2024-07-26 02:09:04.906991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.923 [2024-07-26 02:09:04.916278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.923 [2024-07-26 02:09:04.916747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.923 [2024-07-26 02:09:04.916795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.923 [2024-07-26 02:09:04.916812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.923 [2024-07-26 02:09:04.917050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.923 [2024-07-26 02:09:04.917303] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.923 [2024-07-26 02:09:04.917326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.923 [2024-07-26 02:09:04.917341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.923 [2024-07-26 02:09:04.920908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.923 [2024-07-26 02:09:04.930194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.923 [2024-07-26 02:09:04.930556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.923 [2024-07-26 02:09:04.930587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:22.923 [2024-07-26 02:09:04.930604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:22.923 [2024-07-26 02:09:04.930843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:22.923 [2024-07-26 02:09:04.931095] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.923 [2024-07-26 02:09:04.931119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.923 [2024-07-26 02:09:04.931134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.182 [2024-07-26 02:09:04.934703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.182 [2024-07-26 02:09:04.944198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.182 [2024-07-26 02:09:04.944630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.182 [2024-07-26 02:09:04.944679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.182 [2024-07-26 02:09:04.944696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.182 [2024-07-26 02:09:04.944934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.182 [2024-07-26 02:09:04.945184] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.182 [2024-07-26 02:09:04.945208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.182 [2024-07-26 02:09:04.945224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.182 [2024-07-26 02:09:04.948794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.182 [2024-07-26 02:09:04.958073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.182 [2024-07-26 02:09:04.958494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.183 [2024-07-26 02:09:04.958544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.183 [2024-07-26 02:09:04.958561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.183 [2024-07-26 02:09:04.958799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.183 [2024-07-26 02:09:04.959040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.183 [2024-07-26 02:09:04.959073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.183 [2024-07-26 02:09:04.959090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.183 [2024-07-26 02:09:04.962662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.183 [2024-07-26 02:09:04.971948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.183 [2024-07-26 02:09:04.972345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.183 [2024-07-26 02:09:04.972377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.183 [2024-07-26 02:09:04.972395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.183 [2024-07-26 02:09:04.972633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.183 [2024-07-26 02:09:04.972874] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.183 [2024-07-26 02:09:04.972897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.183 [2024-07-26 02:09:04.972912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.183 [2024-07-26 02:09:04.976493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.183 [2024-07-26 02:09:04.985978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.183 [2024-07-26 02:09:04.986370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.183 [2024-07-26 02:09:04.986401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.183 [2024-07-26 02:09:04.986418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.183 [2024-07-26 02:09:04.986656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.183 [2024-07-26 02:09:04.986898] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.183 [2024-07-26 02:09:04.986921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.183 [2024-07-26 02:09:04.986936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.183 [2024-07-26 02:09:04.990517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.183 [2024-07-26 02:09:04.999997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.183 [2024-07-26 02:09:05.000402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.183 [2024-07-26 02:09:05.000433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.183 [2024-07-26 02:09:05.000451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.183 [2024-07-26 02:09:05.000694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.183 [2024-07-26 02:09:05.000936] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.183 [2024-07-26 02:09:05.000959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.183 [2024-07-26 02:09:05.000975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.183 [2024-07-26 02:09:05.004553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.183 [2024-07-26 02:09:05.014034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.183 [2024-07-26 02:09:05.014532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.183 [2024-07-26 02:09:05.014581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.183 [2024-07-26 02:09:05.014599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.183 [2024-07-26 02:09:05.014837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.183 [2024-07-26 02:09:05.015088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.183 [2024-07-26 02:09:05.015111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.183 [2024-07-26 02:09:05.015127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.183 [2024-07-26 02:09:05.018697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.183 [2024-07-26 02:09:05.027964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.183 [2024-07-26 02:09:05.028353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.183 [2024-07-26 02:09:05.028384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.183 [2024-07-26 02:09:05.028401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.183 [2024-07-26 02:09:05.028638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.183 [2024-07-26 02:09:05.028880] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.183 [2024-07-26 02:09:05.028903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.183 [2024-07-26 02:09:05.028918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.183 [2024-07-26 02:09:05.032497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.183 [2024-07-26 02:09:05.041972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.183 [2024-07-26 02:09:05.042398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.183 [2024-07-26 02:09:05.042429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.183 [2024-07-26 02:09:05.042446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.183 [2024-07-26 02:09:05.042684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.183 [2024-07-26 02:09:05.042926] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.183 [2024-07-26 02:09:05.042949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.183 [2024-07-26 02:09:05.042972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.183 [2024-07-26 02:09:05.046554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.183 [2024-07-26 02:09:05.055823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.183 [2024-07-26 02:09:05.056239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.183 [2024-07-26 02:09:05.056269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.183 [2024-07-26 02:09:05.056287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.183 [2024-07-26 02:09:05.056525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.183 [2024-07-26 02:09:05.056766] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.183 [2024-07-26 02:09:05.056790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.183 [2024-07-26 02:09:05.056805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.183 [2024-07-26 02:09:05.060383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.183 [2024-07-26 02:09:05.069855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.183 [2024-07-26 02:09:05.070250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.183 [2024-07-26 02:09:05.070280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.183 [2024-07-26 02:09:05.070298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.183 [2024-07-26 02:09:05.070535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.183 [2024-07-26 02:09:05.070788] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.183 [2024-07-26 02:09:05.070811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.183 [2024-07-26 02:09:05.070827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.183 [2024-07-26 02:09:05.074403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.183 [2024-07-26 02:09:05.083878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.183 [2024-07-26 02:09:05.084276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.183 [2024-07-26 02:09:05.084307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.183 [2024-07-26 02:09:05.084325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.183 [2024-07-26 02:09:05.084562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.183 [2024-07-26 02:09:05.084804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.183 [2024-07-26 02:09:05.084828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.183 [2024-07-26 02:09:05.084843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.183 [2024-07-26 02:09:05.088420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.183 [2024-07-26 02:09:05.097899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.183 [2024-07-26 02:09:05.098315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.183 [2024-07-26 02:09:05.098351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.184 [2024-07-26 02:09:05.098369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.184 [2024-07-26 02:09:05.098607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.184 [2024-07-26 02:09:05.098850] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.184 [2024-07-26 02:09:05.098873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.184 [2024-07-26 02:09:05.098888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.184 [2024-07-26 02:09:05.102465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.184 [2024-07-26 02:09:05.111738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.184 [2024-07-26 02:09:05.112142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.184 [2024-07-26 02:09:05.112173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.184 [2024-07-26 02:09:05.112190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.184 [2024-07-26 02:09:05.112428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.184 [2024-07-26 02:09:05.112669] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.184 [2024-07-26 02:09:05.112692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.184 [2024-07-26 02:09:05.112707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.184 [2024-07-26 02:09:05.116283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.184 [2024-07-26 02:09:05.125760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.184 [2024-07-26 02:09:05.126150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.184 [2024-07-26 02:09:05.126181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.184 [2024-07-26 02:09:05.126198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.184 [2024-07-26 02:09:05.126436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.184 [2024-07-26 02:09:05.126678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.184 [2024-07-26 02:09:05.126701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.184 [2024-07-26 02:09:05.126717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.184 [2024-07-26 02:09:05.130297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.184 [2024-07-26 02:09:05.139770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.184 [2024-07-26 02:09:05.140168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.184 [2024-07-26 02:09:05.140200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.184 [2024-07-26 02:09:05.140218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.184 [2024-07-26 02:09:05.140455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.184 [2024-07-26 02:09:05.140703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.184 [2024-07-26 02:09:05.140726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.184 [2024-07-26 02:09:05.140741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.184 [2024-07-26 02:09:05.144322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.184 [2024-07-26 02:09:05.153799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.184 [2024-07-26 02:09:05.154187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.184 [2024-07-26 02:09:05.154218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.184 [2024-07-26 02:09:05.154235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.184 [2024-07-26 02:09:05.154473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.184 [2024-07-26 02:09:05.154715] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.184 [2024-07-26 02:09:05.154738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.184 [2024-07-26 02:09:05.154753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.184 [2024-07-26 02:09:05.158332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.184 [2024-07-26 02:09:05.167809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.184 [2024-07-26 02:09:05.168205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.184 [2024-07-26 02:09:05.168236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.184 [2024-07-26 02:09:05.168254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.184 [2024-07-26 02:09:05.168491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.184 [2024-07-26 02:09:05.168733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.184 [2024-07-26 02:09:05.168756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.184 [2024-07-26 02:09:05.168771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.184 [2024-07-26 02:09:05.172367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.184 [2024-07-26 02:09:05.181842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.184 [2024-07-26 02:09:05.182261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.184 [2024-07-26 02:09:05.182292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.184 [2024-07-26 02:09:05.182309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.184 [2024-07-26 02:09:05.182547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.184 [2024-07-26 02:09:05.182789] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.184 [2024-07-26 02:09:05.182812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.184 [2024-07-26 02:09:05.182827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.184 [2024-07-26 02:09:05.186410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.467 [2024-07-26 02:09:05.195681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.467 [2024-07-26 02:09:05.196084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.467 [2024-07-26 02:09:05.196116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.467 [2024-07-26 02:09:05.196134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.467 [2024-07-26 02:09:05.196372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.467 [2024-07-26 02:09:05.196613] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.467 [2024-07-26 02:09:05.196636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.467 [2024-07-26 02:09:05.196652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.467 [2024-07-26 02:09:05.200231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.467 [2024-07-26 02:09:05.209813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.467 [2024-07-26 02:09:05.210231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.467 [2024-07-26 02:09:05.210263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.467 [2024-07-26 02:09:05.210281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.467 [2024-07-26 02:09:05.210519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.467 [2024-07-26 02:09:05.210760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.467 [2024-07-26 02:09:05.210783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.467 [2024-07-26 02:09:05.210798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.467 [2024-07-26 02:09:05.214376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.467 [2024-07-26 02:09:05.223851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.467 [2024-07-26 02:09:05.224267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.467 [2024-07-26 02:09:05.224298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.467 [2024-07-26 02:09:05.224315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.467 [2024-07-26 02:09:05.224553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.467 [2024-07-26 02:09:05.224795] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.467 [2024-07-26 02:09:05.224818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.467 [2024-07-26 02:09:05.224833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.467 [2024-07-26 02:09:05.228514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.467 [2024-07-26 02:09:05.237785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.467 [2024-07-26 02:09:05.238183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.467 [2024-07-26 02:09:05.238214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.467 [2024-07-26 02:09:05.238238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.467 [2024-07-26 02:09:05.238477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.467 [2024-07-26 02:09:05.238719] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.467 [2024-07-26 02:09:05.238742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.467 [2024-07-26 02:09:05.238758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.467 [2024-07-26 02:09:05.242338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.467 [2024-07-26 02:09:05.251809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.467 [2024-07-26 02:09:05.252182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.467 [2024-07-26 02:09:05.252213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.467 [2024-07-26 02:09:05.252230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.467 [2024-07-26 02:09:05.252468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.467 [2024-07-26 02:09:05.252710] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.467 [2024-07-26 02:09:05.252733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.467 [2024-07-26 02:09:05.252748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.467 [2024-07-26 02:09:05.256327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.467 [2024-07-26 02:09:05.265801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.467 [2024-07-26 02:09:05.266173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.467 [2024-07-26 02:09:05.266205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.467 [2024-07-26 02:09:05.266222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.467 [2024-07-26 02:09:05.266461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.467 [2024-07-26 02:09:05.266703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.467 [2024-07-26 02:09:05.266726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.467 [2024-07-26 02:09:05.266742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.467 [2024-07-26 02:09:05.270321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.467 [2024-07-26 02:09:05.279809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.467 [2024-07-26 02:09:05.280225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.467 [2024-07-26 02:09:05.280257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.467 [2024-07-26 02:09:05.280274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.467 [2024-07-26 02:09:05.280512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.467 [2024-07-26 02:09:05.280754] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.467 [2024-07-26 02:09:05.280784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.467 [2024-07-26 02:09:05.280800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.467 [2024-07-26 02:09:05.284377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.467 [2024-07-26 02:09:05.293666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.467 [2024-07-26 02:09:05.294090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.467 [2024-07-26 02:09:05.294122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.467 [2024-07-26 02:09:05.294140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.467 [2024-07-26 02:09:05.294379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.467 [2024-07-26 02:09:05.294621] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.467 [2024-07-26 02:09:05.294644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.467 [2024-07-26 02:09:05.294659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.467 [2024-07-26 02:09:05.298240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.467 [2024-07-26 02:09:05.307520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.467 [2024-07-26 02:09:05.307952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.467 [2024-07-26 02:09:05.307983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.467 [2024-07-26 02:09:05.308000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.468 [2024-07-26 02:09:05.308252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.468 [2024-07-26 02:09:05.308496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.468 [2024-07-26 02:09:05.308519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.468 [2024-07-26 02:09:05.308534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.468 [2024-07-26 02:09:05.312115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.468 [2024-07-26 02:09:05.321380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.468 [2024-07-26 02:09:05.321790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.468 [2024-07-26 02:09:05.321820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.468 [2024-07-26 02:09:05.321837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.468 [2024-07-26 02:09:05.322087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.468 [2024-07-26 02:09:05.322329] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.468 [2024-07-26 02:09:05.322352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.468 [2024-07-26 02:09:05.322367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.468 [2024-07-26 02:09:05.325935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.468 [2024-07-26 02:09:05.335218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.468 [2024-07-26 02:09:05.335627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.468 [2024-07-26 02:09:05.335658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.468 [2024-07-26 02:09:05.335675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.468 [2024-07-26 02:09:05.335912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.468 [2024-07-26 02:09:05.336168] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.468 [2024-07-26 02:09:05.336191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.468 [2024-07-26 02:09:05.336206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.468 [2024-07-26 02:09:05.339772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.468 [2024-07-26 02:09:05.349049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.468 [2024-07-26 02:09:05.349446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.468 [2024-07-26 02:09:05.349477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.468 [2024-07-26 02:09:05.349494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.468 [2024-07-26 02:09:05.349732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.468 [2024-07-26 02:09:05.349974] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.468 [2024-07-26 02:09:05.349997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.468 [2024-07-26 02:09:05.350013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.468 [2024-07-26 02:09:05.353590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.468 [2024-07-26 02:09:05.362899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.468 [2024-07-26 02:09:05.363295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.468 [2024-07-26 02:09:05.363326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.468 [2024-07-26 02:09:05.363344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.468 [2024-07-26 02:09:05.363581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.468 [2024-07-26 02:09:05.363823] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.468 [2024-07-26 02:09:05.363847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.468 [2024-07-26 02:09:05.363862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.468 [2024-07-26 02:09:05.367448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.468 [2024-07-26 02:09:05.376737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.468 [2024-07-26 02:09:05.377133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.468 [2024-07-26 02:09:05.377164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.468 [2024-07-26 02:09:05.377182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.468 [2024-07-26 02:09:05.377426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.468 [2024-07-26 02:09:05.377668] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.468 [2024-07-26 02:09:05.377696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.468 [2024-07-26 02:09:05.377713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.468 [2024-07-26 02:09:05.381288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.468 [2024-07-26 02:09:05.390784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.468 [2024-07-26 02:09:05.391216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.468 [2024-07-26 02:09:05.391247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.468 [2024-07-26 02:09:05.391264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.468 [2024-07-26 02:09:05.391502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.468 [2024-07-26 02:09:05.391744] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.468 [2024-07-26 02:09:05.391768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.468 [2024-07-26 02:09:05.391784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.468 [2024-07-26 02:09:05.395360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.468 [2024-07-26 02:09:05.404636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.468 [2024-07-26 02:09:05.405057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.468 [2024-07-26 02:09:05.405094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.468 [2024-07-26 02:09:05.405112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.468 [2024-07-26 02:09:05.405350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.468 [2024-07-26 02:09:05.405592] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.468 [2024-07-26 02:09:05.405615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.468 [2024-07-26 02:09:05.405630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.468 [2024-07-26 02:09:05.409209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.468 [2024-07-26 02:09:05.418475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.468 [2024-07-26 02:09:05.418867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.468 [2024-07-26 02:09:05.418898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.468 [2024-07-26 02:09:05.418915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.468 [2024-07-26 02:09:05.419171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.468 [2024-07-26 02:09:05.419415] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.468 [2024-07-26 02:09:05.419438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.468 [2024-07-26 02:09:05.419459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.468 [2024-07-26 02:09:05.423028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.468 [2024-07-26 02:09:05.432514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.468 [2024-07-26 02:09:05.432925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.468 [2024-07-26 02:09:05.432955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.468 [2024-07-26 02:09:05.432973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.468 [2024-07-26 02:09:05.433232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.468 [2024-07-26 02:09:05.433481] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.468 [2024-07-26 02:09:05.433504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.468 [2024-07-26 02:09:05.433520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.468 [2024-07-26 02:09:05.437099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.468 [2024-07-26 02:09:05.446381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.468 [2024-07-26 02:09:05.446741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.468 [2024-07-26 02:09:05.446772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.468 [2024-07-26 02:09:05.446789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.469 [2024-07-26 02:09:05.447027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.469 [2024-07-26 02:09:05.447277] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.469 [2024-07-26 02:09:05.447301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.469 [2024-07-26 02:09:05.447316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.469 [2024-07-26 02:09:05.450881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.736 [2024-07-26 02:09:05.460365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.736 [2024-07-26 02:09:05.460729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.736 [2024-07-26 02:09:05.460759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.736 [2024-07-26 02:09:05.460780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.736 [2024-07-26 02:09:05.461018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.736 [2024-07-26 02:09:05.461270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.736 [2024-07-26 02:09:05.461294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.736 [2024-07-26 02:09:05.461309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.736 [2024-07-26 02:09:05.464876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.736 [2024-07-26 02:09:05.474399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.736 [2024-07-26 02:09:05.474834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.736 [2024-07-26 02:09:05.474865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.736 [2024-07-26 02:09:05.474882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.736 [2024-07-26 02:09:05.475128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.736 [2024-07-26 02:09:05.475371] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.736 [2024-07-26 02:09:05.475395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.736 [2024-07-26 02:09:05.475410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.736 [2024-07-26 02:09:05.478976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.736 [2024-07-26 02:09:05.488246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.736 [2024-07-26 02:09:05.488821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.736 [2024-07-26 02:09:05.488854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.736 [2024-07-26 02:09:05.488872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.736 [2024-07-26 02:09:05.489122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.736 [2024-07-26 02:09:05.489366] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.736 [2024-07-26 02:09:05.489390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.736 [2024-07-26 02:09:05.489405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.736 [2024-07-26 02:09:05.492979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.736 [2024-07-26 02:09:05.502262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.736 [2024-07-26 02:09:05.502687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.736 [2024-07-26 02:09:05.502718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.736 [2024-07-26 02:09:05.502736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.736 [2024-07-26 02:09:05.502974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.736 [2024-07-26 02:09:05.503225] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.736 [2024-07-26 02:09:05.503249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.736 [2024-07-26 02:09:05.503265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.736 [2024-07-26 02:09:05.506831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.736 [2024-07-26 02:09:05.516122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.736 [2024-07-26 02:09:05.516530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.736 [2024-07-26 02:09:05.516561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.736 [2024-07-26 02:09:05.516579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.736 [2024-07-26 02:09:05.516816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.736 [2024-07-26 02:09:05.517076] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.736 [2024-07-26 02:09:05.517099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.736 [2024-07-26 02:09:05.517115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.736 [2024-07-26 02:09:05.520683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.736 [2024-07-26 02:09:05.529951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.736 [2024-07-26 02:09:05.530356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.736 [2024-07-26 02:09:05.530387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.736 [2024-07-26 02:09:05.530405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.736 [2024-07-26 02:09:05.530642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.736 [2024-07-26 02:09:05.530884] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.736 [2024-07-26 02:09:05.530907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.736 [2024-07-26 02:09:05.530922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.736 [2024-07-26 02:09:05.534501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.736 [2024-07-26 02:09:05.543980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.736 [2024-07-26 02:09:05.544399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.736 [2024-07-26 02:09:05.544429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.736 [2024-07-26 02:09:05.544447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.736 [2024-07-26 02:09:05.544685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.736 [2024-07-26 02:09:05.544926] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.736 [2024-07-26 02:09:05.544949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.736 [2024-07-26 02:09:05.544965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.736 [2024-07-26 02:09:05.548551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.736 [2024-07-26 02:09:05.557826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.736 [2024-07-26 02:09:05.558220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.736 [2024-07-26 02:09:05.558251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420
00:33:23.736 [2024-07-26 02:09:05.558269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set
00:33:23.736 [2024-07-26 02:09:05.558507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor
00:33:23.736 [2024-07-26 02:09:05.558748] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.736 [2024-07-26 02:09:05.558771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.736 [2024-07-26 02:09:05.558786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.736 [2024-07-26 02:09:05.562372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.736 [2024-07-26 02:09:05.571863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.736 [2024-07-26 02:09:05.572260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.736 [2024-07-26 02:09:05.572291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.736 [2024-07-26 02:09:05.572309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.736 [2024-07-26 02:09:05.572546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.736 [2024-07-26 02:09:05.572798] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.736 [2024-07-26 02:09:05.572822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.736 [2024-07-26 02:09:05.572836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.736 [2024-07-26 02:09:05.576414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.736 [2024-07-26 02:09:05.585894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.736 [2024-07-26 02:09:05.586289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.736 [2024-07-26 02:09:05.586320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.736 [2024-07-26 02:09:05.586337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.736 [2024-07-26 02:09:05.586575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.736 [2024-07-26 02:09:05.586817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.736 [2024-07-26 02:09:05.586840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.736 [2024-07-26 02:09:05.586855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.737 [2024-07-26 02:09:05.590432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.737 [2024-07-26 02:09:05.599904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.737 [2024-07-26 02:09:05.600300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.737 [2024-07-26 02:09:05.600330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.737 [2024-07-26 02:09:05.600348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.737 [2024-07-26 02:09:05.600585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.737 [2024-07-26 02:09:05.600827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.737 [2024-07-26 02:09:05.600850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.737 [2024-07-26 02:09:05.600865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.737 [2024-07-26 02:09:05.604440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.737 [2024-07-26 02:09:05.613931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.737 [2024-07-26 02:09:05.614355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.737 [2024-07-26 02:09:05.614391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.737 [2024-07-26 02:09:05.614410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.737 [2024-07-26 02:09:05.614647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.737 [2024-07-26 02:09:05.614889] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.737 [2024-07-26 02:09:05.614912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.737 [2024-07-26 02:09:05.614927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.737 [2024-07-26 02:09:05.618505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.737 [2024-07-26 02:09:05.627986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.737 [2024-07-26 02:09:05.628380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.737 [2024-07-26 02:09:05.628411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.737 [2024-07-26 02:09:05.628429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.737 [2024-07-26 02:09:05.628666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.737 [2024-07-26 02:09:05.628908] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.737 [2024-07-26 02:09:05.628931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.737 [2024-07-26 02:09:05.628946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.737 [2024-07-26 02:09:05.632522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.737 [2024-07-26 02:09:05.642008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.737 [2024-07-26 02:09:05.642396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.737 [2024-07-26 02:09:05.642427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.737 [2024-07-26 02:09:05.642444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.737 [2024-07-26 02:09:05.642682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.737 [2024-07-26 02:09:05.642924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.737 [2024-07-26 02:09:05.642947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.737 [2024-07-26 02:09:05.642962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.737 [2024-07-26 02:09:05.646539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.737 [2024-07-26 02:09:05.656016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.737 [2024-07-26 02:09:05.656437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.737 [2024-07-26 02:09:05.656469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.737 [2024-07-26 02:09:05.656486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.737 [2024-07-26 02:09:05.656723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.737 [2024-07-26 02:09:05.656971] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.737 [2024-07-26 02:09:05.656995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.737 [2024-07-26 02:09:05.657010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.737 [2024-07-26 02:09:05.660586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.737 [2024-07-26 02:09:05.669851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.737 [2024-07-26 02:09:05.670266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.737 [2024-07-26 02:09:05.670297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.737 [2024-07-26 02:09:05.670315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.737 [2024-07-26 02:09:05.670553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.737 [2024-07-26 02:09:05.670795] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.737 [2024-07-26 02:09:05.670818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.737 [2024-07-26 02:09:05.670833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.737 [2024-07-26 02:09:05.674427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.737 [2024-07-26 02:09:05.683708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.737 [2024-07-26 02:09:05.684122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.737 [2024-07-26 02:09:05.684154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.737 [2024-07-26 02:09:05.684172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.737 [2024-07-26 02:09:05.684410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.737 [2024-07-26 02:09:05.684653] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.737 [2024-07-26 02:09:05.684676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.737 [2024-07-26 02:09:05.684691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.737 [2024-07-26 02:09:05.688275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.737 [2024-07-26 02:09:05.697543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.737 [2024-07-26 02:09:05.697956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.737 [2024-07-26 02:09:05.697986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.737 [2024-07-26 02:09:05.698004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.737 [2024-07-26 02:09:05.698251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.737 [2024-07-26 02:09:05.698494] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.737 [2024-07-26 02:09:05.698517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.737 [2024-07-26 02:09:05.698532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.737 [2024-07-26 02:09:05.702112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.737 [2024-07-26 02:09:05.711391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.737 [2024-07-26 02:09:05.711781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.737 [2024-07-26 02:09:05.711811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.737 [2024-07-26 02:09:05.711829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.737 [2024-07-26 02:09:05.712078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.737 [2024-07-26 02:09:05.712320] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.737 [2024-07-26 02:09:05.712343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.737 [2024-07-26 02:09:05.712359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.737 [2024-07-26 02:09:05.715928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.737 [2024-07-26 02:09:05.725438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.737 [2024-07-26 02:09:05.725834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.737 [2024-07-26 02:09:05.725865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.737 [2024-07-26 02:09:05.725882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.737 [2024-07-26 02:09:05.726130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.737 [2024-07-26 02:09:05.726373] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.737 [2024-07-26 02:09:05.726396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.737 [2024-07-26 02:09:05.726412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.737 [2024-07-26 02:09:05.729977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.738 [2024-07-26 02:09:05.739490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.738 [2024-07-26 02:09:05.739876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.738 [2024-07-26 02:09:05.739907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.738 [2024-07-26 02:09:05.739924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.738 [2024-07-26 02:09:05.740171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.738 [2024-07-26 02:09:05.740414] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.738 [2024-07-26 02:09:05.740437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.738 [2024-07-26 02:09:05.740452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.738 [2024-07-26 02:09:05.744028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.997 [2024-07-26 02:09:05.753523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.997 [2024-07-26 02:09:05.753932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.997 [2024-07-26 02:09:05.753963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.997 [2024-07-26 02:09:05.753987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.997 [2024-07-26 02:09:05.754236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.997 [2024-07-26 02:09:05.754479] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.997 [2024-07-26 02:09:05.754502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.997 [2024-07-26 02:09:05.754517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.997 [2024-07-26 02:09:05.758096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.997 [2024-07-26 02:09:05.767374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.997 [2024-07-26 02:09:05.767763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.997 [2024-07-26 02:09:05.767795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.997 [2024-07-26 02:09:05.767812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.998 [2024-07-26 02:09:05.768051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.998 [2024-07-26 02:09:05.768303] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.998 [2024-07-26 02:09:05.768326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.998 [2024-07-26 02:09:05.768341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.998 [2024-07-26 02:09:05.771914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.998 [2024-07-26 02:09:05.781413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.998 [2024-07-26 02:09:05.781858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.998 [2024-07-26 02:09:05.781907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.998 [2024-07-26 02:09:05.781924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.998 [2024-07-26 02:09:05.782174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.998 [2024-07-26 02:09:05.782416] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.998 [2024-07-26 02:09:05.782440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.998 [2024-07-26 02:09:05.782455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.998 [2024-07-26 02:09:05.786024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.998 [2024-07-26 02:09:05.795294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.998 [2024-07-26 02:09:05.795709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.998 [2024-07-26 02:09:05.795740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.998 [2024-07-26 02:09:05.795757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.998 [2024-07-26 02:09:05.795995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.998 [2024-07-26 02:09:05.796247] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.998 [2024-07-26 02:09:05.796281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.998 [2024-07-26 02:09:05.796298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.998 [2024-07-26 02:09:05.799870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.998 [2024-07-26 02:09:05.809149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.998 [2024-07-26 02:09:05.809557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.998 [2024-07-26 02:09:05.809588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.998 [2024-07-26 02:09:05.809606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.998 [2024-07-26 02:09:05.809844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.998 [2024-07-26 02:09:05.810096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.998 [2024-07-26 02:09:05.810120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.998 [2024-07-26 02:09:05.810135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.998 [2024-07-26 02:09:05.813704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.998 [2024-07-26 02:09:05.822991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.998 [2024-07-26 02:09:05.823421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.998 [2024-07-26 02:09:05.823453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.998 [2024-07-26 02:09:05.823470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.998 [2024-07-26 02:09:05.823708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.998 [2024-07-26 02:09:05.823950] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.998 [2024-07-26 02:09:05.823973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.998 [2024-07-26 02:09:05.823988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.998 [2024-07-26 02:09:05.827568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.998 [2024-07-26 02:09:05.836834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.998 [2024-07-26 02:09:05.837227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.998 [2024-07-26 02:09:05.837258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.998 [2024-07-26 02:09:05.837275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.998 [2024-07-26 02:09:05.837513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.998 [2024-07-26 02:09:05.837754] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.998 [2024-07-26 02:09:05.837778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.998 [2024-07-26 02:09:05.837793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.998 [2024-07-26 02:09:05.841374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.998 [2024-07-26 02:09:05.850864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.998 [2024-07-26 02:09:05.851243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.998 [2024-07-26 02:09:05.851274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.998 [2024-07-26 02:09:05.851291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.998 [2024-07-26 02:09:05.851529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.998 [2024-07-26 02:09:05.851770] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.998 [2024-07-26 02:09:05.851793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.998 [2024-07-26 02:09:05.851808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.998 [2024-07-26 02:09:05.855388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.998 [2024-07-26 02:09:05.864865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.998 [2024-07-26 02:09:05.865293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.998 [2024-07-26 02:09:05.865324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.998 [2024-07-26 02:09:05.865341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.998 [2024-07-26 02:09:05.865578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.998 [2024-07-26 02:09:05.865820] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.998 [2024-07-26 02:09:05.865843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.998 [2024-07-26 02:09:05.865858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.998 [2024-07-26 02:09:05.869436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.998 [2024-07-26 02:09:05.878715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.998 [2024-07-26 02:09:05.879095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.998 [2024-07-26 02:09:05.879126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.998 [2024-07-26 02:09:05.879144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.998 [2024-07-26 02:09:05.879382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.998 [2024-07-26 02:09:05.879624] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.998 [2024-07-26 02:09:05.879646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.998 [2024-07-26 02:09:05.879662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.998 [2024-07-26 02:09:05.883247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.998 [2024-07-26 02:09:05.892738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.998 [2024-07-26 02:09:05.893148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.998 [2024-07-26 02:09:05.893180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.998 [2024-07-26 02:09:05.893197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.998 [2024-07-26 02:09:05.893440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.998 [2024-07-26 02:09:05.893683] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.998 [2024-07-26 02:09:05.893705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.998 [2024-07-26 02:09:05.893721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.998 [2024-07-26 02:09:05.897300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.998 [2024-07-26 02:09:05.906775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.998 [2024-07-26 02:09:05.907168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.998 [2024-07-26 02:09:05.907199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.998 [2024-07-26 02:09:05.907216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.999 [2024-07-26 02:09:05.907454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.999 [2024-07-26 02:09:05.907695] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.999 [2024-07-26 02:09:05.907719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.999 [2024-07-26 02:09:05.907734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.999 [2024-07-26 02:09:05.911321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.999 [2024-07-26 02:09:05.920793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.999 [2024-07-26 02:09:05.921186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.999 [2024-07-26 02:09:05.921216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.999 [2024-07-26 02:09:05.921233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.999 [2024-07-26 02:09:05.921471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.999 [2024-07-26 02:09:05.921713] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.999 [2024-07-26 02:09:05.921736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.999 [2024-07-26 02:09:05.921751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.999 [2024-07-26 02:09:05.925329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.999 [2024-07-26 02:09:05.934840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.999 [2024-07-26 02:09:05.935260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.999 [2024-07-26 02:09:05.935291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.999 [2024-07-26 02:09:05.935311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.999 [2024-07-26 02:09:05.935549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.999 [2024-07-26 02:09:05.935791] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.999 [2024-07-26 02:09:05.935814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.999 [2024-07-26 02:09:05.935834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.999 [2024-07-26 02:09:05.939413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.999 [2024-07-26 02:09:05.948694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.999 [2024-07-26 02:09:05.949086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.999 [2024-07-26 02:09:05.949117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.999 [2024-07-26 02:09:05.949135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.999 [2024-07-26 02:09:05.949372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.999 [2024-07-26 02:09:05.949615] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.999 [2024-07-26 02:09:05.949638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.999 [2024-07-26 02:09:05.949653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.999 [2024-07-26 02:09:05.953230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.999 [2024-07-26 02:09:05.962710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.999 [2024-07-26 02:09:05.963094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.999 [2024-07-26 02:09:05.963126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.999 [2024-07-26 02:09:05.963143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.999 [2024-07-26 02:09:05.963381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.999 [2024-07-26 02:09:05.963621] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.999 [2024-07-26 02:09:05.963644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.999 [2024-07-26 02:09:05.963658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.999 [2024-07-26 02:09:05.967235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.999 [2024-07-26 02:09:05.976723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.999 [2024-07-26 02:09:05.977120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.999 [2024-07-26 02:09:05.977151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.999 [2024-07-26 02:09:05.977169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.999 [2024-07-26 02:09:05.977406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.999 [2024-07-26 02:09:05.977648] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.999 [2024-07-26 02:09:05.977671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.999 [2024-07-26 02:09:05.977687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.999 [2024-07-26 02:09:05.981268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.999 [2024-07-26 02:09:05.990745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.999 [2024-07-26 02:09:05.991132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.999 [2024-07-26 02:09:05.991168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.999 [2024-07-26 02:09:05.991187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.999 [2024-07-26 02:09:05.991425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.999 [2024-07-26 02:09:05.991667] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.999 [2024-07-26 02:09:05.991689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.999 [2024-07-26 02:09:05.991704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.999 [2024-07-26 02:09:05.995344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.999 [2024-07-26 02:09:06.004620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.999 [2024-07-26 02:09:06.005011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.999 [2024-07-26 02:09:06.005042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:23.999 [2024-07-26 02:09:06.005068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:23.999 [2024-07-26 02:09:06.005309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:23.999 [2024-07-26 02:09:06.005551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.999 [2024-07-26 02:09:06.005574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.999 [2024-07-26 02:09:06.005589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.259 [2024-07-26 02:09:06.009171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.259 [2024-07-26 02:09:06.018666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.259 [2024-07-26 02:09:06.019090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.259 [2024-07-26 02:09:06.019125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.259 [2024-07-26 02:09:06.019143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.259 [2024-07-26 02:09:06.019380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.259 [2024-07-26 02:09:06.019625] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.259 [2024-07-26 02:09:06.019647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.259 [2024-07-26 02:09:06.019663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.259 [2024-07-26 02:09:06.023245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.259 [2024-07-26 02:09:06.032539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.259 [2024-07-26 02:09:06.032952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.259 [2024-07-26 02:09:06.032984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.259 [2024-07-26 02:09:06.033001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.259 [2024-07-26 02:09:06.033249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.259 [2024-07-26 02:09:06.033498] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.259 [2024-07-26 02:09:06.033522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.259 [2024-07-26 02:09:06.033537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.259 [2024-07-26 02:09:06.037117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.259 [2024-07-26 02:09:06.046406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.259 [2024-07-26 02:09:06.046765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.259 [2024-07-26 02:09:06.046795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.259 [2024-07-26 02:09:06.046813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.259 [2024-07-26 02:09:06.047050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.259 [2024-07-26 02:09:06.047302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.259 [2024-07-26 02:09:06.047325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.259 [2024-07-26 02:09:06.047341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.259 [2024-07-26 02:09:06.050922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.259 [2024-07-26 02:09:06.060437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.259 [2024-07-26 02:09:06.060846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.259 [2024-07-26 02:09:06.060878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.259 [2024-07-26 02:09:06.060895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.259 [2024-07-26 02:09:06.061144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.259 [2024-07-26 02:09:06.061387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.259 [2024-07-26 02:09:06.061411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.259 [2024-07-26 02:09:06.061426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.259 [2024-07-26 02:09:06.065000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.259 [2024-07-26 02:09:06.074297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.259 [2024-07-26 02:09:06.074677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.259 [2024-07-26 02:09:06.074709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.259 [2024-07-26 02:09:06.074726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.259 [2024-07-26 02:09:06.074965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.259 [2024-07-26 02:09:06.075218] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.259 [2024-07-26 02:09:06.075243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.259 [2024-07-26 02:09:06.075258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.259 [2024-07-26 02:09:06.078833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.259 [2024-07-26 02:09:06.088320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.259 [2024-07-26 02:09:06.088706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.259 [2024-07-26 02:09:06.088737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.259 [2024-07-26 02:09:06.088754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.259 [2024-07-26 02:09:06.088992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.259 [2024-07-26 02:09:06.089244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.259 [2024-07-26 02:09:06.089267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.259 [2024-07-26 02:09:06.089283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.259 [2024-07-26 02:09:06.092856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.259 [2024-07-26 02:09:06.102347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.259 [2024-07-26 02:09:06.102757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.259 [2024-07-26 02:09:06.102787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.259 [2024-07-26 02:09:06.102804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.259 [2024-07-26 02:09:06.103042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.259 [2024-07-26 02:09:06.103294] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.259 [2024-07-26 02:09:06.103318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.259 [2024-07-26 02:09:06.103334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.259 [2024-07-26 02:09:06.106906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.259 [2024-07-26 02:09:06.116200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.259 [2024-07-26 02:09:06.116597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.259 [2024-07-26 02:09:06.116628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.259 [2024-07-26 02:09:06.116646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.259 [2024-07-26 02:09:06.116884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.259 [2024-07-26 02:09:06.117138] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.259 [2024-07-26 02:09:06.117162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.259 [2024-07-26 02:09:06.117177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.259 [2024-07-26 02:09:06.120747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.259 [2024-07-26 02:09:06.130235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.259 [2024-07-26 02:09:06.130622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.259 [2024-07-26 02:09:06.130653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.259 [2024-07-26 02:09:06.130676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.259 [2024-07-26 02:09:06.130914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.259 [2024-07-26 02:09:06.131167] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.259 [2024-07-26 02:09:06.131200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.259 [2024-07-26 02:09:06.131216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.259 [2024-07-26 02:09:06.134788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.259 [2024-07-26 02:09:06.144073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.259 [2024-07-26 02:09:06.144511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.259 [2024-07-26 02:09:06.144558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.259 [2024-07-26 02:09:06.144576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.259 [2024-07-26 02:09:06.144814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.259 [2024-07-26 02:09:06.145057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.260 [2024-07-26 02:09:06.145089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.260 [2024-07-26 02:09:06.145105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.260 [2024-07-26 02:09:06.148682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.260 [2024-07-26 02:09:06.157952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.260 [2024-07-26 02:09:06.158397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.260 [2024-07-26 02:09:06.158447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.260 [2024-07-26 02:09:06.158464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.260 [2024-07-26 02:09:06.158702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.260 [2024-07-26 02:09:06.158943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.260 [2024-07-26 02:09:06.158966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.260 [2024-07-26 02:09:06.158983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.260 [2024-07-26 02:09:06.162559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2416875 Killed "${NVMF_APP[@]}" "$@" 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2418043 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2418043 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2418043 ']' 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:24.260 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.260 [2024-07-26 02:09:06.171836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.260 [2024-07-26 02:09:06.172263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.260 [2024-07-26 02:09:06.172294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.260 [2024-07-26 02:09:06.172312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.260 [2024-07-26 02:09:06.172549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.260 [2024-07-26 02:09:06.172791] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.260 [2024-07-26 02:09:06.172814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.260 [2024-07-26 02:09:06.172829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.260 [2024-07-26 02:09:06.176423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.260 [2024-07-26 02:09:06.185704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.260 [2024-07-26 02:09:06.186097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.260 [2024-07-26 02:09:06.186129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.260 [2024-07-26 02:09:06.186146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.260 [2024-07-26 02:09:06.186385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.260 [2024-07-26 02:09:06.186627] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.260 [2024-07-26 02:09:06.186650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.260 [2024-07-26 02:09:06.186665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.260 [2024-07-26 02:09:06.190247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.260 [2024-07-26 02:09:06.199733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.260 [2024-07-26 02:09:06.200151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.260 [2024-07-26 02:09:06.200182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.260 [2024-07-26 02:09:06.200200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.260 [2024-07-26 02:09:06.200439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.260 [2024-07-26 02:09:06.200689] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.260 [2024-07-26 02:09:06.200713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.260 [2024-07-26 02:09:06.200728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.260 [2024-07-26 02:09:06.204312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.260 [2024-07-26 02:09:06.211568] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:33:24.260 [2024-07-26 02:09:06.211637] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.260 [2024-07-26 02:09:06.213587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.260 [2024-07-26 02:09:06.214000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.260 [2024-07-26 02:09:06.214031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.260 [2024-07-26 02:09:06.214049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.260 [2024-07-26 02:09:06.214295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.260 [2024-07-26 02:09:06.214538] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.260 [2024-07-26 02:09:06.214561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.260 [2024-07-26 02:09:06.214577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.260 [2024-07-26 02:09:06.218154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.260 [2024-07-26 02:09:06.227587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.260 [2024-07-26 02:09:06.228002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.260 [2024-07-26 02:09:06.228034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.260 [2024-07-26 02:09:06.228051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.260 [2024-07-26 02:09:06.228299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.260 [2024-07-26 02:09:06.228542] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.260 [2024-07-26 02:09:06.228564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.260 [2024-07-26 02:09:06.228580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.260 [2024-07-26 02:09:06.232154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.260 [2024-07-26 02:09:06.241624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.260 [2024-07-26 02:09:06.242025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.260 [2024-07-26 02:09:06.242056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.260 [2024-07-26 02:09:06.242083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.260 [2024-07-26 02:09:06.242321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.260 [2024-07-26 02:09:06.242563] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.260 [2024-07-26 02:09:06.242591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.260 [2024-07-26 02:09:06.242607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.260 [2024-07-26 02:09:06.246184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.260 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.260 [2024-07-26 02:09:06.255541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.260 [2024-07-26 02:09:06.255931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.260 [2024-07-26 02:09:06.255962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.260 [2024-07-26 02:09:06.255980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.260 [2024-07-26 02:09:06.256228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.260 [2024-07-26 02:09:06.256471] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.260 [2024-07-26 02:09:06.256494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.260 [2024-07-26 02:09:06.256510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.261 [2024-07-26 02:09:06.260091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.519 [2024-07-26 02:09:06.269579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.519 [2024-07-26 02:09:06.269956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.519 [2024-07-26 02:09:06.269987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.519 [2024-07-26 02:09:06.270004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.520 [2024-07-26 02:09:06.270253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.520 [2024-07-26 02:09:06.270496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.520 [2024-07-26 02:09:06.270520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.520 [2024-07-26 02:09:06.270535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.520 [2024-07-26 02:09:06.274111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.520 [2024-07-26 02:09:06.283618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.520 [2024-07-26 02:09:06.284033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.520 [2024-07-26 02:09:06.284071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.520 [2024-07-26 02:09:06.284091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.520 [2024-07-26 02:09:06.284328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.520 [2024-07-26 02:09:06.284570] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.520 [2024-07-26 02:09:06.284593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.520 [2024-07-26 02:09:06.284608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.520 [2024-07-26 02:09:06.288187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.520 [2024-07-26 02:09:06.288245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:24.520 [2024-07-26 02:09:06.297484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.520 [2024-07-26 02:09:06.298029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.520 [2024-07-26 02:09:06.298078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.520 [2024-07-26 02:09:06.298102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.520 [2024-07-26 02:09:06.298352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.520 [2024-07-26 02:09:06.298601] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.520 [2024-07-26 02:09:06.298625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.520 [2024-07-26 02:09:06.298644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.520 [2024-07-26 02:09:06.302228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.520 [2024-07-26 02:09:06.311524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.520 [2024-07-26 02:09:06.312010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.520 [2024-07-26 02:09:06.312046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.520 [2024-07-26 02:09:06.312074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.520 [2024-07-26 02:09:06.312319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.520 [2024-07-26 02:09:06.312563] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.520 [2024-07-26 02:09:06.312586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.520 [2024-07-26 02:09:06.312604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.520 [2024-07-26 02:09:06.316185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.520 [2024-07-26 02:09:06.325452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.520 [2024-07-26 02:09:06.325853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.520 [2024-07-26 02:09:06.325884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.520 [2024-07-26 02:09:06.325904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.520 [2024-07-26 02:09:06.326157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.520 [2024-07-26 02:09:06.326401] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.520 [2024-07-26 02:09:06.326425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.520 [2024-07-26 02:09:06.326441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.520 [2024-07-26 02:09:06.330006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.520 [2024-07-26 02:09:06.339496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.520 [2024-07-26 02:09:06.339973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.520 [2024-07-26 02:09:06.340011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.520 [2024-07-26 02:09:06.340042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.520 [2024-07-26 02:09:06.340299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.520 [2024-07-26 02:09:06.340547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.520 [2024-07-26 02:09:06.340572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.520 [2024-07-26 02:09:06.340591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.520 [2024-07-26 02:09:06.344173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.520 [2024-07-26 02:09:06.353463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.520 [2024-07-26 02:09:06.353970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.520 [2024-07-26 02:09:06.354011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.520 [2024-07-26 02:09:06.354032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.520 [2024-07-26 02:09:06.354293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.520 [2024-07-26 02:09:06.354543] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.520 [2024-07-26 02:09:06.354568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.520 [2024-07-26 02:09:06.354587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.520 [2024-07-26 02:09:06.358164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.520 [2024-07-26 02:09:06.367435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.520 [2024-07-26 02:09:06.367849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.520 [2024-07-26 02:09:06.367880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.520 [2024-07-26 02:09:06.367897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.520 [2024-07-26 02:09:06.368147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.520 [2024-07-26 02:09:06.368391] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.520 [2024-07-26 02:09:06.368415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.520 [2024-07-26 02:09:06.368430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.520 [2024-07-26 02:09:06.372000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.520 [2024-07-26 02:09:06.381308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.520 [2024-07-26 02:09:06.381718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.520 [2024-07-26 02:09:06.381750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.520 [2024-07-26 02:09:06.381768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.520 [2024-07-26 02:09:06.382008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.520 [2024-07-26 02:09:06.382223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.520 [2024-07-26 02:09:06.382260] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.520 [2024-07-26 02:09:06.382270] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.520 [2024-07-26 02:09:06.382277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.520 [2024-07-26 02:09:06.382294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.520 [2024-07-26 02:09:06.382297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.520 [2024-07-26 02:09:06.382306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:24.520 [2024-07-26 02:09:06.382313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:24.520 [2024-07-26 02:09:06.382390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:24.520 [2024-07-26 02:09:06.382413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:24.520 [2024-07-26 02:09:06.382417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.520 [2024-07-26 02:09:06.385883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.520 [2024-07-26 02:09:06.395191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.520 [2024-07-26 02:09:06.395797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.520 [2024-07-26 02:09:06.395842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.520 [2024-07-26 02:09:06.395865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.520 [2024-07-26 02:09:06.396127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.521 [2024-07-26 02:09:06.396376] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.521 [2024-07-26 02:09:06.396401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.521 [2024-07-26 02:09:06.396422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.521 [2024-07-26 02:09:06.399997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.521 [2024-07-26 02:09:06.409316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.521 [2024-07-26 02:09:06.409896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.521 [2024-07-26 02:09:06.409941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.521 [2024-07-26 02:09:06.409963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.521 [2024-07-26 02:09:06.410225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.521 [2024-07-26 02:09:06.410475] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.521 [2024-07-26 02:09:06.410499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.521 [2024-07-26 02:09:06.410518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.521 [2024-07-26 02:09:06.414103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.521 [2024-07-26 02:09:06.423406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.521 [2024-07-26 02:09:06.423993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.521 [2024-07-26 02:09:06.424040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.521 [2024-07-26 02:09:06.424084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.521 [2024-07-26 02:09:06.424337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.521 [2024-07-26 02:09:06.424585] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.521 [2024-07-26 02:09:06.424609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.521 [2024-07-26 02:09:06.424628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.521 [2024-07-26 02:09:06.428212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.521 [2024-07-26 02:09:06.437507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.521 [2024-07-26 02:09:06.438026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.521 [2024-07-26 02:09:06.438073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.521 [2024-07-26 02:09:06.438096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.521 [2024-07-26 02:09:06.438344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.521 [2024-07-26 02:09:06.438591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.521 [2024-07-26 02:09:06.438615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.521 [2024-07-26 02:09:06.438634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.521 [2024-07-26 02:09:06.442212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.521 [2024-07-26 02:09:06.451502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.521 [2024-07-26 02:09:06.452069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.521 [2024-07-26 02:09:06.452117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.521 [2024-07-26 02:09:06.452139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.521 [2024-07-26 02:09:06.452392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.521 [2024-07-26 02:09:06.452641] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.521 [2024-07-26 02:09:06.452665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.521 [2024-07-26 02:09:06.452685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.521 [2024-07-26 02:09:06.456267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.521 [2024-07-26 02:09:06.465559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.521 [2024-07-26 02:09:06.466146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.521 [2024-07-26 02:09:06.466191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.521 [2024-07-26 02:09:06.466213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.521 [2024-07-26 02:09:06.466464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.521 [2024-07-26 02:09:06.466728] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.521 [2024-07-26 02:09:06.466753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.521 [2024-07-26 02:09:06.466772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.521 [2024-07-26 02:09:06.470350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.521 [2024-07-26 02:09:06.479641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.521 [2024-07-26 02:09:06.480045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.521 [2024-07-26 02:09:06.480084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.521 [2024-07-26 02:09:06.480103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.521 [2024-07-26 02:09:06.480342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.521 [2024-07-26 02:09:06.480584] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.521 [2024-07-26 02:09:06.480607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.521 [2024-07-26 02:09:06.480623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.521 [2024-07-26 02:09:06.484199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.521 [2024-07-26 02:09:06.493113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.521 [2024-07-26 02:09:06.493452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.521 [2024-07-26 02:09:06.493480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.521 [2024-07-26 02:09:06.493496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.521 [2024-07-26 02:09:06.493711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.521 [2024-07-26 02:09:06.493929] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.521 [2024-07-26 02:09:06.493950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.521 [2024-07-26 02:09:06.493964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.521 [2024-07-26 02:09:06.497188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.521 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:24.521 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:33:24.521 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:24.521 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:24.521 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.521 [2024-07-26 02:09:06.506752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.521 [2024-07-26 02:09:06.507114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.521 [2024-07-26 02:09:06.507142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.521 [2024-07-26 02:09:06.507159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.521 [2024-07-26 02:09:06.507372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.521 [2024-07-26 02:09:06.507600] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.521 [2024-07-26 02:09:06.507622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.521 [2024-07-26 02:09:06.507636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.521 [2024-07-26 02:09:06.510906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.521 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.521 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:24.521 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.521 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.521 [2024-07-26 02:09:06.520298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.521 [2024-07-26 02:09:06.520618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.521 [2024-07-26 02:09:06.520646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.521 [2024-07-26 02:09:06.520662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.521 [2024-07-26 02:09:06.520876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.521 [2024-07-26 02:09:06.521104] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.522 [2024-07-26 02:09:06.521126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.522 [2024-07-26 02:09:06.521141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.522 [2024-07-26 02:09:06.522247] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.522 [2024-07-26 02:09:06.524411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.782 [2024-07-26 02:09:06.533843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.782 [2024-07-26 02:09:06.534197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.782 [2024-07-26 02:09:06.534225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.782 [2024-07-26 02:09:06.534240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.782 [2024-07-26 02:09:06.534454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.782 [2024-07-26 02:09:06.534672] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.782 [2024-07-26 02:09:06.534693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.782 [2024-07-26 02:09:06.534707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.782 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.782 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:24.782 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.782 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.782 [2024-07-26 02:09:06.537921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.782 [2024-07-26 02:09:06.547498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.782 [2024-07-26 02:09:06.548003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.782 [2024-07-26 02:09:06.548050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.782 [2024-07-26 02:09:06.548077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.782 [2024-07-26 02:09:06.548313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.782 [2024-07-26 02:09:06.548535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.782 [2024-07-26 02:09:06.548557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.782 [2024-07-26 02:09:06.548575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.782 [2024-07-26 02:09:06.551796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.782 Malloc0 00:33:24.782 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.782 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:24.782 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.782 [2024-07-26 02:09:06.561177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.782 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.782 [2024-07-26 02:09:06.561666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.783 [2024-07-26 02:09:06.561699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.783 [2024-07-26 02:09:06.561719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.783 [2024-07-26 02:09:06.561943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.783 [2024-07-26 02:09:06.562176] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.783 [2024-07-26 02:09:06.562199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.783 [2024-07-26 02:09:06.562215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:24.783 [2024-07-26 02:09:06.565487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:24.783 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.783 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:24.783 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.783 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.783 [2024-07-26 02:09:06.574685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.783 [2024-07-26 02:09:06.575082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.783 [2024-07-26 02:09:06.575111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feaed0 with addr=10.0.0.2, port=4420 00:33:24.783 [2024-07-26 02:09:06.575127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feaed0 is same with the state(5) to be set 00:33:24.783 [2024-07-26 02:09:06.575342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feaed0 (9): Bad file descriptor 00:33:24.783 [2024-07-26 02:09:06.575560] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:24.783 [2024-07-26 02:09:06.575581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:24.783 [2024-07-26 02:09:06.575595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:24.783 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.783 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.783 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.783 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.783 [2024-07-26 02:09:06.578874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:24.783 [2024-07-26 02:09:06.580372] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.783 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.783 02:09:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2417167 00:33:24.783 [2024-07-26 02:09:06.588334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:24.783 [2024-07-26 02:09:06.618399] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:34.757 00:33:34.757 Latency(us) 00:33:34.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.757 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:34.757 Verification LBA range: start 0x0 length 0x4000 00:33:34.757 Nvme1n1 : 15.01 6713.23 26.22 8464.05 0.00 8408.32 831.34 21554.06 00:33:34.757 =================================================================================================================== 00:33:34.757 Total : 6713.23 26.22 8464.05 0.00 8408.32 831.34 21554.06 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:34.757 rmmod nvme_tcp 00:33:34.757 rmmod nvme_fabrics 00:33:34.757 rmmod nvme_keyring 00:33:34.757 02:09:15 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2418043 ']' 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2418043 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2418043 ']' 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2418043 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2418043 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2418043' 00:33:34.757 killing process with pid 2418043 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2418043 00:33:34.757 02:09:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2418043 00:33:34.757 02:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:34.757 02:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:34.757 02:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:34.757 02:09:16 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:34.757 02:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:34.757 02:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.757 02:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.757 02:09:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.137 02:09:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:36.137 00:33:36.137 real 0m22.083s 00:33:36.137 user 0m59.482s 00:33:36.137 sys 0m4.065s 00:33:36.137 02:09:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:36.137 02:09:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:36.137 ************************************ 00:33:36.137 END TEST nvmf_bdevperf 00:33:36.137 ************************************ 00:33:36.396 02:09:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:36.396 02:09:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:36.396 02:09:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:36.396 02:09:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.396 ************************************ 00:33:36.396 START TEST nvmf_target_disconnect 00:33:36.396 ************************************ 00:33:36.396 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:36.396 * Looking for test storage... 
00:33:36.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:36.396 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:36.396 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:36.397 02:09:18 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:36.397 02:09:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:38.303 
02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.303 02:09:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:38.303 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.303 02:09:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:38.303 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:38.303 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:38.303 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:38.304 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:38.304 02:09:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:38.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:33:38.304 00:33:38.304 --- 10.0.0.2 ping statistics --- 00:33:38.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.304 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:38.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:33:38.304 00:33:38.304 --- 10.0.0.1 ping statistics --- 00:33:38.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.304 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:38.304 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:38.562 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:38.562 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:38.563 ************************************ 00:33:38.563 START TEST nvmf_target_disconnect_tc1 00:33:38.563 ************************************ 00:33:38.563 02:09:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:38.563 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.563 [2024-07-26 02:09:20.428801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.563 [2024-07-26 02:09:20.428871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc0e70 with addr=10.0.0.2, port=4420 00:33:38.563 [2024-07-26 02:09:20.428911] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:38.563 [2024-07-26 02:09:20.428938] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:38.563 [2024-07-26 02:09:20.428952] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:38.563 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:38.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:38.563 Initializing NVMe Controllers 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:38.563 02:09:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:38.563 00:33:38.563 real 0m0.093s 00:33:38.563 user 0m0.040s 00:33:38.563 sys 0m0.052s 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:38.563 ************************************ 00:33:38.563 END TEST nvmf_target_disconnect_tc1 00:33:38.563 ************************************ 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:38.563 ************************************ 00:33:38.563 START TEST nvmf_target_disconnect_tc2 00:33:38.563 ************************************ 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2421577 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2421577 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2421577 ']' 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:38.563 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.563 [2024-07-26 02:09:20.536709] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:33:38.563 [2024-07-26 02:09:20.536799] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.563 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.822 [2024-07-26 02:09:20.605449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:38.822 [2024-07-26 02:09:20.691850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:38.822 [2024-07-26 02:09:20.691906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:38.822 [2024-07-26 02:09:20.691935] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:38.822 [2024-07-26 02:09:20.691947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:38.822 [2024-07-26 02:09:20.691957] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:38.822 [2024-07-26 02:09:20.692006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:38.822 [2024-07-26 02:09:20.692075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:38.822 [2024-07-26 02:09:20.692134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:38.822 [2024-07-26 02:09:20.692137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:38.822 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:38.822 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:33:38.822 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:38.822 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:38.822 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.822 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:38.822 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:38.822 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.822 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.081 Malloc0 00:33:39.081 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.081 02:09:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.082 [2024-07-26 02:09:20.859851] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.082 02:09:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.082 [2024-07-26 02:09:20.888119] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2421720 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:39.082 02:09:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:39.082 EAL: No free 2048 kB 
hugepages reported on node 1 00:33:40.988 02:09:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2421577 00:33:40.988 02:09:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Write completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Write completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Write completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Write completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Write completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Write completed with error (sct=0, 
sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Write completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Write completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Write completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 [2024-07-26 02:09:22.912390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 
starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Write completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Write completed with error (sct=0, sc=8) 00:33:40.988 starting I/O failed 00:33:40.988 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting 
I/O failed 00:33:40.989 [2024-07-26 02:09:22.912768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Write completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 00:33:40.989 Read completed with error (sct=0, sc=8) 00:33:40.989 starting I/O failed 
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 [2024-07-26 02:09:22.913081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Write completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 Read completed with error (sct=0, sc=8)
00:33:40.989 starting I/O failed
00:33:40.989 [2024-07-26 02:09:22.913401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:40.989 [2024-07-26 02:09:22.913598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.989 [2024-07-26 02:09:22.913643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.989 qpair failed and we were unable to recover it.
00:33:40.989 [2024-07-26 02:09:22.913761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.913788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.913937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.913962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.914103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.914130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.914251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.914277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.914432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.914458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.914593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.914619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.914744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.914772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.914917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.914942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.915098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.915124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.915242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.915267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.915387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.915414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.915603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.915629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.915728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.915753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.915877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.915919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.916072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.916103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.916256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.916281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.916407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.916433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.916551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.916576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.916687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.916712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.916886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.916915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.917071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.917117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.917255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.917280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.917422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.917447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.917584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.917609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.917716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.917742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.917933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.917959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.918078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.918107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.918246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.918272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.918379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.918404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.918543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.918569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.918730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.918756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.918884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.918923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.919043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.919080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.990 [2024-07-26 02:09:22.919192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.990 [2024-07-26 02:09:22.919218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.990 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.919327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.919359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.919527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.919553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.919662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.919687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.919823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.919848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.919966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.920004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.920157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.920185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.920301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.920328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.920507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.920533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.920693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.920718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.920844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.920870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.920986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.921014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.921150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.921190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.921319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.921357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.921543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.921594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.921761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.921787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.921896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.921922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.922113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.922140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.922279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.922304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.922424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.922450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.922594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.922620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.922754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.922800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.923037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.923070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.923178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.923204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.923316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.923341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.923475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.923501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.923642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.923667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.923800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.923825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.923967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.923992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.924113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.924139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.924279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.924305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.924441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.924465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.924572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.924599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.991 qpair failed and we were unable to recover it.
00:33:40.991 [2024-07-26 02:09:22.924733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.991 [2024-07-26 02:09:22.924760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.924928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.924953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.925070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.925096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.925239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.925267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.925408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.925434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.925597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.925623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.925864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.925890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.926034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.926068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.926175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.926200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.926333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.926363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.926504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.926529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.926673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.926699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.926801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.926826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.926934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.926960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.927106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.927133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.927244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.927270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.927397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.927436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.927577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.927605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.927744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.927771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.927906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.927933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.928046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.928078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.928188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.928214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.928345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.928371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.928533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.928559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.928667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.928694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.928859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.928885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.928988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.929015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.929164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.992 [2024-07-26 02:09:22.929197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:40.992 qpair failed and we were unable to recover it.
00:33:40.992 [2024-07-26 02:09:22.929310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.992 [2024-07-26 02:09:22.929337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.992 qpair failed and we were unable to recover it. 00:33:40.992 [2024-07-26 02:09:22.929498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.992 [2024-07-26 02:09:22.929524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.992 qpair failed and we were unable to recover it. 00:33:40.992 [2024-07-26 02:09:22.929687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.992 [2024-07-26 02:09:22.929715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.992 qpair failed and we were unable to recover it. 00:33:40.992 [2024-07-26 02:09:22.929879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.992 [2024-07-26 02:09:22.929907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.992 qpair failed and we were unable to recover it. 00:33:40.992 [2024-07-26 02:09:22.930024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.992 [2024-07-26 02:09:22.930050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.992 qpair failed and we were unable to recover it. 
00:33:40.992 [2024-07-26 02:09:22.930194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.930220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.930335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.930361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.930519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.930548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.930736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.930789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.930930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.930958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 
00:33:40.993 [2024-07-26 02:09:22.931106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.931134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.931265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.931292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.931464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.931491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.931652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.931678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.931788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.931814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 
00:33:40.993 [2024-07-26 02:09:22.931992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.932021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.932143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.932170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.932333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.932359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.932492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.932519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.932688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.932715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 
00:33:40.993 [2024-07-26 02:09:22.932879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.932905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.933043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.933077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.933184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.933211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.933351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.933378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.933566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.933592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 
00:33:40.993 [2024-07-26 02:09:22.933750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.933776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.933889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.933917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.934052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.934092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.934204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.934230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.934330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.934356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 
00:33:40.993 [2024-07-26 02:09:22.934496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.934521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.934657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.934683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.934820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.934846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.934964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.934992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.935117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.935145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 
00:33:40.993 [2024-07-26 02:09:22.935268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.935313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.935468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.935495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.935656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.993 [2024-07-26 02:09:22.935682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.993 qpair failed and we were unable to recover it. 00:33:40.993 [2024-07-26 02:09:22.935846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.935874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.935993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.936025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 
00:33:40.994 [2024-07-26 02:09:22.936179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.936205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.936318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.936343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.936478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.936504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.936617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.936643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.936814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.936839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 
00:33:40.994 [2024-07-26 02:09:22.936947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.936973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.937117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.937143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.937280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.937308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.937435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.937479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.937605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.937635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 
00:33:40.994 [2024-07-26 02:09:22.937810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.937836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.937985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.938024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.938144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.938173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.938322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.938350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.938485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.938512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 
00:33:40.994 [2024-07-26 02:09:22.938662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.938692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.938813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.938842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.938997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.939023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.939135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.939163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.939295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.939321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 
00:33:40.994 [2024-07-26 02:09:22.939439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.939465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.939573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.939600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.939763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.939789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.939919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.939945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 00:33:40.994 [2024-07-26 02:09:22.940084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.994 [2024-07-26 02:09:22.940111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.994 qpair failed and we were unable to recover it. 
00:33:40.995 [2024-07-26 02:09:22.940223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.940251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.940444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.940483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.940655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.940683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.940811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.940854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.941020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.941046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 
00:33:40.995 [2024-07-26 02:09:22.941171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.941198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.941333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.941359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.941468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.941494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.941638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.941665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.941801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.941827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 
00:33:40.995 [2024-07-26 02:09:22.941966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.941992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.942103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.942130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.942246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.942272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.942407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.942434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.942572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.942603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 
00:33:40.995 [2024-07-26 02:09:22.942712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.942741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.942908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.942935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.943043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.943076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.943211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.943237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.943343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.943369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 
00:33:40.995 [2024-07-26 02:09:22.943517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.943544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.943677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.943704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.943866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.943892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.944052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.944084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 00:33:40.995 [2024-07-26 02:09:22.944221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.995 [2024-07-26 02:09:22.944247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.995 qpair failed and we were unable to recover it. 
00:33:40.995 [2024-07-26 02:09:22.944353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.995 [2024-07-26 02:09:22.944378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:40.995 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; qpair failed and we were unable to recover it) repeats continuously from 02:09:22.944525 through 02:09:22.962985, cycling across tqpair=0x7fd148000b90, 0x7fd150000b90, and 0x7fd158000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:33:40.999 [2024-07-26 02:09:22.963121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.963147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.963276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.963303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.963442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.963467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.963572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.963597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.963708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.963735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 
00:33:40.999 [2024-07-26 02:09:22.963874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.963900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.964013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.964040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.964185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.964212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.964407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.964433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.964575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.964601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 
00:33:40.999 [2024-07-26 02:09:22.964802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.964827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.964963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.964990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.965103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.965129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.965274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.965300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.965435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.965460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 
00:33:40.999 [2024-07-26 02:09:22.965570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.965596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.965771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.965799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.965937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.965963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.966089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.966117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.966285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.966313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 
00:33:40.999 [2024-07-26 02:09:22.966447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.999 [2024-07-26 02:09:22.966474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:40.999 qpair failed and we were unable to recover it. 00:33:40.999 [2024-07-26 02:09:22.966630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.966674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.966812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.966837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.967000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.967027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.967182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.967222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 
00:33:41.000 [2024-07-26 02:09:22.967330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.967357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.967475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.967501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.967605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.967631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.967762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.967790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.967948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.967974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 
00:33:41.000 [2024-07-26 02:09:22.968105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.968131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.968264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.968290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.968449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.968477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.968622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.968651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.968808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.968834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 
00:33:41.000 [2024-07-26 02:09:22.968947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.968973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.969114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.969140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.969263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.969292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.969458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.969501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.969711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.969745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 
00:33:41.000 [2024-07-26 02:09:22.969931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.969956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.970066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.970092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.970201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.970227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.970370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.970396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.970528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.970554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 
00:33:41.000 [2024-07-26 02:09:22.970663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.970688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.970808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.970837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.970997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.971023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.971190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.971229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.971346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.971374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 
00:33:41.000 [2024-07-26 02:09:22.971476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.000 [2024-07-26 02:09:22.971502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.000 qpair failed and we were unable to recover it. 00:33:41.000 [2024-07-26 02:09:22.971632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.971658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.971769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.971795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.971915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.971941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.972089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.972117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 
00:33:41.001 [2024-07-26 02:09:22.972254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.972281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.972420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.972446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.972585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.972611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.972739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.972765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.972879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.972905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 
00:33:41.001 [2024-07-26 02:09:22.973008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.973033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.973174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.973200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.973343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.973369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.973530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.973572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.973771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.973800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 
00:33:41.001 [2024-07-26 02:09:22.973984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.974010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.974146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.974186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.974304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.974332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.974473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.974498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.974626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.974657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 
00:33:41.001 [2024-07-26 02:09:22.974882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.974909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.975043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.975075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.975211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.975237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.975370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.975396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 00:33:41.001 [2024-07-26 02:09:22.975502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.001 [2024-07-26 02:09:22.975527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.001 qpair failed and we were unable to recover it. 
00:33:41.001 [2024-07-26 02:09:22.975688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.001 [2024-07-26 02:09:22.975714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.001 qpair failed and we were unable to recover it.
00:33:41.001 [2024-07-26 02:09:22.975862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.001 [2024-07-26 02:09:22.975904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.001 qpair failed and we were unable to recover it.
00:33:41.001 [2024-07-26 02:09:22.976071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.001 [2024-07-26 02:09:22.976098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.001 qpair failed and we were unable to recover it.
00:33:41.001 [2024-07-26 02:09:22.976234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.001 [2024-07-26 02:09:22.976261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.001 qpair failed and we were unable to recover it.
00:33:41.001 [2024-07-26 02:09:22.976414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.001 [2024-07-26 02:09:22.976448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.001 qpair failed and we were unable to recover it.
00:33:41.001 [2024-07-26 02:09:22.976596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.001 [2024-07-26 02:09:22.976624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.001 qpair failed and we were unable to recover it.
00:33:41.001 [2024-07-26 02:09:22.976774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.001 [2024-07-26 02:09:22.976804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.001 qpair failed and we were unable to recover it.
00:33:41.001 [2024-07-26 02:09:22.976978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.001 [2024-07-26 02:09:22.977017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.001 qpair failed and we were unable to recover it.
00:33:41.001 [2024-07-26 02:09:22.977176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.001 [2024-07-26 02:09:22.977204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.001 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.977367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.977411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.977556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.977605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.977714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.977741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.977904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.977930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.978074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.978102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.978241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.978267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.978407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.978432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.978547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.978573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.978689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.978714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.978863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.978890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.979030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.979056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.979206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.979234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.979390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.979415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.979609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.979635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.979769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.979794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.979898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.979922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.980073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.980113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.980251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.980278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.980390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.980417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.980568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.980598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.980742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.980771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.980936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.980962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.981086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.981114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.981246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.981271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.981432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.981457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.981567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.981594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.981758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.981783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.981917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.981942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.982080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.982108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.982257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.982284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.982451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.982476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.982577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.002 [2024-07-26 02:09:22.982603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.002 qpair failed and we were unable to recover it.
00:33:41.002 [2024-07-26 02:09:22.982796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.982822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.982929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.982954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.983098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.983126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.983239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.983270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.983438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.983463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.983562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.983589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.983749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.983774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.983936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.983962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.984087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.984115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.984224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.984250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.984483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.984510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.984623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.984650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.984785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.984811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.985001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.985027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.985168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.985195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.985309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.985335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.985479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.985506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.985649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.985676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.985811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.985838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.985970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.985996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.986111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.986138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.986276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.986301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.986475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.986501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.986611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.986638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.986756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.986782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.986920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.986947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.987119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.987146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.987255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.987281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.987425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.987451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.987584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.987609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.987751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.987776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.987911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.987937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.003 [2024-07-26 02:09:22.988099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.003 [2024-07-26 02:09:22.988125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.003 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.988287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.988312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.988432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.988470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.988615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.988642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.988757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.988783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.988954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.988980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.989115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.989141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.989269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.989295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.989430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.989457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.989581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.989606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.989785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.989813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.989984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.990014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.990133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.990161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.990303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.990328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.990466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.990493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.990662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.990703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.990852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.990881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.990996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.991038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.991193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.991232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.991377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.991405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.991516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.991542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.991649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.991675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.991820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.991845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.991986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.992011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.992149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.992177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.992286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.992312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.992422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.992449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.992653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.992681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.992792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.992820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.004 qpair failed and we were unable to recover it.
00:33:41.004 [2024-07-26 02:09:22.992957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.004 [2024-07-26 02:09:22.992982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.993174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.993206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.993365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.993391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.993523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.993547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.993683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.993709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.993847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.993871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.993982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.994009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.994139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.994169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.994303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.994328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.994446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.994472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.994609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.994634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.994770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.994796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.994933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.279 [2024-07-26 02:09:22.994959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.279 qpair failed and we were unable to recover it.
00:33:41.279 [2024-07-26 02:09:22.995079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.995108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 00:33:41.279 [2024-07-26 02:09:22.995247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.995273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 00:33:41.279 [2024-07-26 02:09:22.995433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.995459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 00:33:41.279 [2024-07-26 02:09:22.995587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.995613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 00:33:41.279 [2024-07-26 02:09:22.995745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.995771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 
00:33:41.279 [2024-07-26 02:09:22.995877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.995902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 00:33:41.279 [2024-07-26 02:09:22.996065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.996092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 00:33:41.279 [2024-07-26 02:09:22.996202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.996228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 00:33:41.279 [2024-07-26 02:09:22.996361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.996388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 00:33:41.279 [2024-07-26 02:09:22.996546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.996577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 
00:33:41.279 [2024-07-26 02:09:22.996691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.996718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 00:33:41.279 [2024-07-26 02:09:22.996859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.996886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.279 qpair failed and we were unable to recover it. 00:33:41.279 [2024-07-26 02:09:22.996999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.279 [2024-07-26 02:09:22.997024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.997208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.997236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.997393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.997421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 
00:33:41.280 [2024-07-26 02:09:22.997599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.997642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.997751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.997778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.997913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.997939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.998054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.998092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.998277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.998320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 
00:33:41.280 [2024-07-26 02:09:22.998441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.998484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.998621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.998647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.998809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.998835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.999022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.999067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.999202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.999260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 
00:33:41.280 [2024-07-26 02:09:22.999452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.999480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.999614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.999656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.999820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.999846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:22.999964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:22.999992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.000129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.000155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 
00:33:41.280 [2024-07-26 02:09:23.000291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.000318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.000429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.000454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.000594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.000620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.000761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.000787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.000959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.000986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 
00:33:41.280 [2024-07-26 02:09:23.001116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.001142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.001258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.001291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.001455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.001496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.001636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.001663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.001796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.001823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 
00:33:41.280 [2024-07-26 02:09:23.001963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.001989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.002138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.002177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.002294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.002321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.002483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.002513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 00:33:41.280 [2024-07-26 02:09:23.002676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.280 [2024-07-26 02:09:23.002702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.280 qpair failed and we were unable to recover it. 
00:33:41.280 [2024-07-26 02:09:23.002839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.002866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.003028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.003054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.003257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.003283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.003495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.003520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.003652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.003682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 
00:33:41.281 [2024-07-26 02:09:23.003808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.003838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.004006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.004035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.004183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.004210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.004315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.004341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.004473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.004499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 
00:33:41.281 [2024-07-26 02:09:23.004634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.004661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.004792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.004818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.004950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.004976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.005111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.005138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.005297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.005323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 
00:33:41.281 [2024-07-26 02:09:23.005468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.005494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.005673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.005712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.005860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.005887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.006068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.006094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.006232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.006258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 
00:33:41.281 [2024-07-26 02:09:23.006400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.006426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.006582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.006609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.006744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.006772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.006881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.006907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.007075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.007102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 
00:33:41.281 [2024-07-26 02:09:23.007214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.007240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.007378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.007405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.007512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.007538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.007672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.007699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 00:33:41.281 [2024-07-26 02:09:23.007833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.281 [2024-07-26 02:09:23.007858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.281 qpair failed and we were unable to recover it. 
00:33:41.281 [2024-07-26 02:09:23.007993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.281 [2024-07-26 02:09:23.008017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.281 qpair failed and we were unable to recover it.
00:33:41.281 [2024-07-26 02:09:23.008134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.281 [2024-07-26 02:09:23.008165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.281 qpair failed and we were unable to recover it.
00:33:41.281 [2024-07-26 02:09:23.008304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.281 [2024-07-26 02:09:23.008329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.281 qpair failed and we were unable to recover it.
00:33:41.281 [2024-07-26 02:09:23.008462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.281 [2024-07-26 02:09:23.008486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.008611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.008639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.008798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.008822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.008945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.008984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.009103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.009130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.009243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.009269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.009439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.009482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.009623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.009678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.009791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.009817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.009933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.009960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.010098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.010124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.010284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.010310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.010430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.010456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.010593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.010619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.010755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.010781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.010916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.010945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.011098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.011138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.011280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.011308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.011532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.011583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.011737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.011798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.011959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.011986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.012150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.012177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.012303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.012331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.012449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.012478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.012658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.012684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.012839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.012877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.013031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.013074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.013221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.013249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.013381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.013425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.013637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.013684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.013863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.013892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.014070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.014116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.014247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.014273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.282 [2024-07-26 02:09:23.014475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.282 [2024-07-26 02:09:23.014501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.282 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.014689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.014719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.014871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.014897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.015065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.015092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.015228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.015254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.015462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.015496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.015614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.015640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.015853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.015904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.016069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.016096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.016227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.016252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.016387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.016412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.016559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.016585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.016694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.016719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.016849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.016874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.017023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.017057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.017204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.017229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.017361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.017387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.017534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.017575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.017731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.017760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.017895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.017924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.018077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.018104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.018245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.018271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.018418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.018460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.018579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.018622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.018741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.018782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.018888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.018913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.019073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.019112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.019253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.019280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.019434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.019479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.019649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.019675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.019804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.019829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.019959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.019985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.020158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.020190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.020325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.283 [2024-07-26 02:09:23.020351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.283 qpair failed and we were unable to recover it.
00:33:41.283 [2024-07-26 02:09:23.020463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.020488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.020623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.020648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.020785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.020811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.020919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.020944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.021044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.021075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.021239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.021265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.021376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.021402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.021573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.021598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.021733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.021758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.021934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.021960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.022071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.022100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.022233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.022259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.022411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.022437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.022549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.022576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.022733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.022759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.022898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.022925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.023040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.023071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.023234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.023260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.023392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.023417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.023516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.023542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.023683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.023717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.023857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.023886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.024030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.024066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.024202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.024228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.024391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.024417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.024566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.024596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.024731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.024757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.024933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.284 [2024-07-26 02:09:23.024959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.284 qpair failed and we were unable to recover it.
00:33:41.284 [2024-07-26 02:09:23.025096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.025122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.025289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.025315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.025458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.025483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.025588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.025614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.025753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.025779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.025898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.025927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.026085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.026123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.026294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.026322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.026492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.026519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.026650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.026677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.026785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.026813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.026982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.027009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.027137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.027164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.027298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.027323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.027436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.027461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.027635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.027660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.027787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.285 [2024-07-26 02:09:23.027812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.285 qpair failed and we were unable to recover it.
00:33:41.285 [2024-07-26 02:09:23.027971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.027999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.028183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.028209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.028318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.028344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.028505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.028531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.028758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.028784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 
00:33:41.285 [2024-07-26 02:09:23.028919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.028945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.029131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.029158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.029269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.029295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.029473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.029498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.029629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.029655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 
00:33:41.285 [2024-07-26 02:09:23.029842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.029870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.029979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.030005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.030162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.030190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.030294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.030320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.030431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.030457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 
00:33:41.285 [2024-07-26 02:09:23.030591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.030617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.285 [2024-07-26 02:09:23.030735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.285 [2024-07-26 02:09:23.030761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.285 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.030952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.030981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.031143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.031169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.031275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.031301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 
00:33:41.286 [2024-07-26 02:09:23.031421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.031449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.031598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.031628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.031788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.031814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.031955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.031981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.032115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.032142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 
00:33:41.286 [2024-07-26 02:09:23.032274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.032300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.032440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.032466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.032592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.032621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.032847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.032873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.033034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.033070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 
00:33:41.286 [2024-07-26 02:09:23.033208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.033233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.033343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.033368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.033491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.033520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.033682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.033707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.033842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.033868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 
00:33:41.286 [2024-07-26 02:09:23.034033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.034069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.034216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.034242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.034423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.034452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.034610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.034635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.034753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.034779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 
00:33:41.286 [2024-07-26 02:09:23.034895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.034935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.035075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.035104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.035258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.035302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.035419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.035447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.035582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.035607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 
00:33:41.286 [2024-07-26 02:09:23.035742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.035778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.035915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.035941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.036077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.036104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.036268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.036314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 00:33:41.286 [2024-07-26 02:09:23.036446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.286 [2024-07-26 02:09:23.036489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.286 qpair failed and we were unable to recover it. 
00:33:41.286 [2024-07-26 02:09:23.036685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.036711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.036851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.036877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.036988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.037015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.037176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.037202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.037382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.037408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 
00:33:41.287 [2024-07-26 02:09:23.037552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.037579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.037745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.037774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.037910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.037936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.038099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.038141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.038255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.038281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 
00:33:41.287 [2024-07-26 02:09:23.038452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.038482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.038703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.038729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.038874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.038900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.039092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.039118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.039230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.039256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 
00:33:41.287 [2024-07-26 02:09:23.039417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.039443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.039560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.039585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.039696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.039722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.039851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.039877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.039987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.040020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 
00:33:41.287 [2024-07-26 02:09:23.040161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.040200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.040347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.040375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.040550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.040577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.040704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.040731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.040833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.040859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 
00:33:41.287 [2024-07-26 02:09:23.041000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.041027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.041145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.041172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.041337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.041363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.041504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.041530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.041662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.041691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 
00:33:41.287 [2024-07-26 02:09:23.041805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.041846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.041980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.042006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.042145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.042171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.042297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.287 [2024-07-26 02:09:23.042325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.287 qpair failed and we were unable to recover it. 00:33:41.287 [2024-07-26 02:09:23.042481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.288 [2024-07-26 02:09:23.042510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.288 qpair failed and we were unable to recover it. 
00:33:41.288 [2024-07-26 02:09:23.042685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.042730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.042872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.042899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.043069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.043096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.043239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.043266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.043414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.043440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.043634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.043661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.043802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.043830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.043955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.043983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.044149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.044175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.044285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.044311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.044509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.044535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.044665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.044691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.044795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.044823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.044928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.044962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.045130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.045157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.045316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.045341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.045502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.045528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.045691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.045717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.045826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.045853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.045967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.045993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.046150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.046176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.046288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.046315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.046453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.046479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.046606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.046635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.046807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.046836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.047009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.047035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.047179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.047205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.047341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.047367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.047476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.047501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.047608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.047634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.047775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.047803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.047977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.048013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.048172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.288 [2024-07-26 02:09:23.048199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.288 qpair failed and we were unable to recover it.
00:33:41.288 [2024-07-26 02:09:23.048357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.048405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.048588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.048632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.048818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.048861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.048995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.049022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.049171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.049199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.049352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.049387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.049496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.049530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.049699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.049726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.049865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.049892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.050021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.050047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.050161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.050187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.050331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.050357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.050472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.050499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.050661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.050687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.050805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.050832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.050964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.050992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.051113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.051140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.051320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.051346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.051481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.051508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.051672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.051698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.051802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.051828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.051964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.051992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.052125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.052153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.052288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.052332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.052506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.052565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.052695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.052739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.052879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.052905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.053044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.289 [2024-07-26 02:09:23.053082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.289 qpair failed and we were unable to recover it.
00:33:41.289 [2024-07-26 02:09:23.053186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.053213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.053346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.053384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.053491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.053518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.053627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.053653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.053813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.053839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.053973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.054000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.054131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.054174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.054305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.054331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.054469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.054496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.054609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.054635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.054756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.054782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.054926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.054952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.055125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.055152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.055283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.055327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.055453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.055481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.055625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.055658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.055772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.055798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.055956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.055981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.056120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.056146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.056283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.056327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.056461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.056487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.056648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.056677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.056799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.056825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.056995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.057023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.057175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.057203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.057364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.057390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.057501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.057526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.057642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.057669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.057840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.057866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.057972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.057998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.058125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.058169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.058331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.058360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.058478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.058504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.290 [2024-07-26 02:09:23.058636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.290 [2024-07-26 02:09:23.058662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.290 qpair failed and we were unable to recover it.
00:33:41.291 [2024-07-26 02:09:23.058795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.058820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.058929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.058956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.059124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.059156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.059317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.059343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.059477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.059504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 
00:33:41.291 [2024-07-26 02:09:23.059640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.059666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.059776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.059802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.059969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.059995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.060112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.060139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.060305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.060331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 
00:33:41.291 [2024-07-26 02:09:23.060442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.060470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.060616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.060642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.060791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.060821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.060972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.060998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.061180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.061207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 
00:33:41.291 [2024-07-26 02:09:23.061324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.061351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.061526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.061552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.061657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.061684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.061855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.061882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.062006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.062034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 
00:33:41.291 [2024-07-26 02:09:23.062176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.062203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.062355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.062389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.062497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.062524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.062654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.062698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.062831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.062857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 
00:33:41.291 [2024-07-26 02:09:23.062968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.062996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.063131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.063164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.063309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.063353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.063534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.063562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.063727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.063758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 
00:33:41.291 [2024-07-26 02:09:23.063869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.063896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.064005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.064033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.064156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.291 [2024-07-26 02:09:23.064183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.291 qpair failed and we were unable to recover it. 00:33:41.291 [2024-07-26 02:09:23.064297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.064323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.064467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.064493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 
00:33:41.292 [2024-07-26 02:09:23.064651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.064677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.064810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.064835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.064950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.064976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.065164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.065191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.065297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.065323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 
00:33:41.292 [2024-07-26 02:09:23.065499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.065525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.065655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.065680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.065793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.065819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.065930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.065955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.066126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.066152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 
00:33:41.292 [2024-07-26 02:09:23.066283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.066309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.066421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.066447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.066553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.066579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.066728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.066753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.066926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.066955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 
00:33:41.292 [2024-07-26 02:09:23.067144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.067170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.067273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.067298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.067446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.067472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.067670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.067695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.067829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.067856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 
00:33:41.292 [2024-07-26 02:09:23.067985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.068010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.068191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.068221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.068352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.068381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.068547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.068575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.068696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.068725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 
00:33:41.292 [2024-07-26 02:09:23.068881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.068922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.069052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.069084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.069218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.069244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.069402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.069428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.069566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.069592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 
00:33:41.292 [2024-07-26 02:09:23.069735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.069761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.069905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.292 [2024-07-26 02:09:23.069947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.292 qpair failed and we were unable to recover it. 00:33:41.292 [2024-07-26 02:09:23.070072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.070114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.070253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.070279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.070384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.070410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 
00:33:41.293 [2024-07-26 02:09:23.070530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.070557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.070703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.070731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.070890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.070916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.071030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.071056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.071169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.071195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 
00:33:41.293 [2024-07-26 02:09:23.071346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.071396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.071555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.071582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.071698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.071725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.071833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.071860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.072038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.072089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 
00:33:41.293 [2024-07-26 02:09:23.072255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.072283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.072428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.072455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.072599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.072625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.072830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.072863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 00:33:41.293 [2024-07-26 02:09:23.073002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.293 [2024-07-26 02:09:23.073028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.293 qpair failed and we were unable to recover it. 
00:33:41.293 [2024-07-26 02:09:23.073150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.073176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.073312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.073338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.073519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.073545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.073654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.073691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.073798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.073824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.073979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.074004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.074182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.074208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.074391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.074420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.074694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.074738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.074925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.074951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.075099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.075126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.075277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.075303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.075533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.075562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.075740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.075769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.075934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.075963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.293 qpair failed and we were unable to recover it.
00:33:41.293 [2024-07-26 02:09:23.076113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.293 [2024-07-26 02:09:23.076139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.076256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.076282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.076432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.076461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.076590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.076632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.076793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.076819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.076960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.076987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.077179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.077205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.077309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.077334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.077508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.077536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.077749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.077775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.077884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.077910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.078073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.078115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.078228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.078254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.078357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.078383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.078557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.078582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.078745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.078789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.078924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.078950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.079091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.079118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.079255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.079281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.079414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.079457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.079596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.079625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.079752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.079794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.079967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.079996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.080180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.080206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.080361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.080400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.080595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.080625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.080731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.080758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.080919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.080945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.081052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.081086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.081188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.294 [2024-07-26 02:09:23.081215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.294 qpair failed and we were unable to recover it.
00:33:41.294 [2024-07-26 02:09:23.081375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.081401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.081505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.081532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.081648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.081685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.081857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.081883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.082036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.082068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.082231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.082275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.082425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.082468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.082595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.082643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.082794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.082820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.082933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.082959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.083123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.083161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.083270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.083297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.083433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.083460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.083623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.083649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.083816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.083842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.083982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.084008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.084179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.084206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.084352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.084378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.084537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.084565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.084726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.084753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.084917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.084943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.085072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.085100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.085215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.085241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.085391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.085416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.085539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.085569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.085735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.085760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.085897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.085923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.086037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.086068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.086205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.086248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.086384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.086410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.086514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.086540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.086712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.086738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.295 [2024-07-26 02:09:23.086897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.295 [2024-07-26 02:09:23.086923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.295 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.087038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.087073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.087229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.087268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.087410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.087438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.087546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.087572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.087706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.087733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.087873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.087899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.088018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.088046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.088220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.088249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.088398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.088426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.088596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.088650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.088844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.088870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.088972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.088998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.089164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.089190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.089298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.089324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.089439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.089465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.089656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.089684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.089808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.089838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.089998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.090025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.090201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.090227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.090327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.090378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.090493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.090526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.090688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.090714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.090851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.090876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.091080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.091117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.091244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.091270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.091404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.091430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.091555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.091599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.091831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.091857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.091996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.092026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.092167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.092194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.092358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.092384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.092521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.092562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.092734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.092763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.296 qpair failed and we were unable to recover it.
00:33:41.296 [2024-07-26 02:09:23.092893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.296 [2024-07-26 02:09:23.092936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.297 qpair failed and we were unable to recover it.
00:33:41.297 [2024-07-26 02:09:23.093131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.297 [2024-07-26 02:09:23.093157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.297 qpair failed and we were unable to recover it.
00:33:41.297 [2024-07-26 02:09:23.093293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.297 [2024-07-26 02:09:23.093318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.297 qpair failed and we were unable to recover it.
00:33:41.297 [2024-07-26 02:09:23.093489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.297 [2024-07-26 02:09:23.093515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.297 qpair failed and we were unable to recover it.
00:33:41.297 [2024-07-26 02:09:23.093683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.093709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.093813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.093839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.093965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.093991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.094162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.094189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.094299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.094325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 
00:33:41.297 [2024-07-26 02:09:23.094432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.094458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.094592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.094648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.094825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.094852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.094990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.095017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.095125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.095152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 
00:33:41.297 [2024-07-26 02:09:23.095256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.095282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.095462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.095488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.095597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.095623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.095775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.095803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.095951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.095980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 
00:33:41.297 [2024-07-26 02:09:23.096114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.096157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.096263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.096288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.096469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.096497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.096660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.096690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.096799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.096824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 
00:33:41.297 [2024-07-26 02:09:23.097012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.097040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.097183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.097209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.097336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.097384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.097539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.097575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.097719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.097744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 
00:33:41.297 [2024-07-26 02:09:23.097874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.097899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.098034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.098085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.098214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.297 [2024-07-26 02:09:23.098240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.297 qpair failed and we were unable to recover it. 00:33:41.297 [2024-07-26 02:09:23.098388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.098415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.098553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.098579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 
00:33:41.298 [2024-07-26 02:09:23.098770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.098799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.098951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.098976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.099142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.099169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.099283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.099309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.099499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.099528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 
00:33:41.298 [2024-07-26 02:09:23.099698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.099727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.099894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.099920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.100020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.100046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.100247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.100273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.100420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.100446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 
00:33:41.298 [2024-07-26 02:09:23.100563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.100589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.100729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.100755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.100875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.100901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.101076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.101102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.101236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.101262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 
00:33:41.298 [2024-07-26 02:09:23.101402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.101452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.101575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.101610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.101757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.101786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.101933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.101962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.102157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.102184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 
00:33:41.298 [2024-07-26 02:09:23.102296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.102322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.102493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.102519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.102700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.102725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.102828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.102854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.103051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.103096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 
00:33:41.298 [2024-07-26 02:09:23.103254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.103280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.103415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.103440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.103544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.103570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.103750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.103775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.103898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.103924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 
00:33:41.298 [2024-07-26 02:09:23.104072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.104099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.104255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.298 [2024-07-26 02:09:23.104281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.298 qpair failed and we were unable to recover it. 00:33:41.298 [2024-07-26 02:09:23.104420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.104446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 00:33:41.299 [2024-07-26 02:09:23.104585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.104611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 00:33:41.299 [2024-07-26 02:09:23.104742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.104768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 
00:33:41.299 [2024-07-26 02:09:23.104903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.104929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 00:33:41.299 [2024-07-26 02:09:23.105074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.105101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 00:33:41.299 [2024-07-26 02:09:23.105263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.105289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 00:33:41.299 [2024-07-26 02:09:23.105433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.105464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 00:33:41.299 [2024-07-26 02:09:23.105594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.105620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 
00:33:41.299 [2024-07-26 02:09:23.105799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.105828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 00:33:41.299 [2024-07-26 02:09:23.105976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.106005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 00:33:41.299 [2024-07-26 02:09:23.106168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.106194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 00:33:41.299 [2024-07-26 02:09:23.106359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.106402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 00:33:41.299 [2024-07-26 02:09:23.106552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.299 [2024-07-26 02:09:23.106581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.299 qpair failed and we were unable to recover it. 
00:33:41.299 [2024-07-26 02:09:23.106721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.299 [2024-07-26 02:09:23.106750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.299 qpair failed and we were unable to recover it.
00:33:41.299 [2024-07-26 02:09:23.106944 .. 02:09:23.126334] previous *ERROR* group (connect() failed, errno = 111 / sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated 113 more times
00:33:41.303 [2024-07-26 02:09:23.126493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.303 [2024-07-26 02:09:23.126519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.303 qpair failed and we were unable to recover it.
00:33:41.303 [2024-07-26 02:09:23.126626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.126652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.126788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.126815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.126970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.126998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.127129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.127155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.127296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.127322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 
00:33:41.303 [2024-07-26 02:09:23.127459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.127501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.127646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.127671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.127804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.127829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.127939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.127972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.128112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.128138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 
00:33:41.303 [2024-07-26 02:09:23.128316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.128342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.128461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.128505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.128661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.128687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.128801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.128827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.128943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.128969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 
00:33:41.303 [2024-07-26 02:09:23.129097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.129123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.129283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.129312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.129492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.129518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.129618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.303 [2024-07-26 02:09:23.129648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.303 qpair failed and we were unable to recover it. 00:33:41.303 [2024-07-26 02:09:23.129801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.129828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 
00:33:41.304 [2024-07-26 02:09:23.129961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.129993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.130147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.130173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.130283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.130309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.130440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.130465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.130641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.130667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 
00:33:41.304 [2024-07-26 02:09:23.130816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.130841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.131031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.131063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.131221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.131250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.131392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.131421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.131608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.131637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 
00:33:41.304 [2024-07-26 02:09:23.131789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.131814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.131989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.132018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.132181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.132210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.132389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.132414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.132550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.132576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 
00:33:41.304 [2024-07-26 02:09:23.132679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.132705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.132868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.132898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.133082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.133108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.133248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.133274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.133384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.133409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 
00:33:41.304 [2024-07-26 02:09:23.133547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.133573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.133730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.133759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.304 qpair failed and we were unable to recover it. 00:33:41.304 [2024-07-26 02:09:23.133884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.304 [2024-07-26 02:09:23.133926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.134082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.134126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.134260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.134286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 
00:33:41.305 [2024-07-26 02:09:23.134437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.134467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.134604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.134629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.134741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.134767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.134903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.134932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.135081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.135110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 
00:33:41.305 [2024-07-26 02:09:23.135272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.135298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.135429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.135455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.135623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.135667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.135805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.135831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.135965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.135990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 
00:33:41.305 [2024-07-26 02:09:23.136130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.136173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.136316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.136345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.136529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.136555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.136670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.136695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.136843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.136869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 
00:33:41.305 [2024-07-26 02:09:23.137064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.137094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.137277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.137302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.137432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.137458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.137637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.137666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.137826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.137854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 
00:33:41.305 [2024-07-26 02:09:23.137967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.305 [2024-07-26 02:09:23.137996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.305 qpair failed and we were unable to recover it. 00:33:41.305 [2024-07-26 02:09:23.138131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.306 [2024-07-26 02:09:23.138157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.306 qpair failed and we were unable to recover it. 00:33:41.306 [2024-07-26 02:09:23.138287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.306 [2024-07-26 02:09:23.138313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.306 qpair failed and we were unable to recover it. 00:33:41.306 [2024-07-26 02:09:23.138468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.306 [2024-07-26 02:09:23.138498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.306 qpair failed and we were unable to recover it. 00:33:41.306 [2024-07-26 02:09:23.138611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.306 [2024-07-26 02:09:23.138639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.306 qpair failed and we were unable to recover it. 
00:33:41.306 [2024-07-26 02:09:23.138769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.306 [2024-07-26 02:09:23.138794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.306 qpair failed and we were unable to recover it. 00:33:41.306 [2024-07-26 02:09:23.138957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.306 [2024-07-26 02:09:23.138983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.306 qpair failed and we were unable to recover it. 00:33:41.306 [2024-07-26 02:09:23.139160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.306 [2024-07-26 02:09:23.139190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.306 qpair failed and we were unable to recover it. 00:33:41.306 [2024-07-26 02:09:23.139331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.306 [2024-07-26 02:09:23.139357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.306 qpair failed and we were unable to recover it. 00:33:41.306 [2024-07-26 02:09:23.139491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.306 [2024-07-26 02:09:23.139518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.306 qpair failed and we were unable to recover it. 
00:33:41.306 [2024-07-26 02:09:23.139674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.306 [2024-07-26 02:09:23.139703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.306 qpair failed and we were unable to recover it.
[log truncated: the identical error triplet — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." — repeats continuously for tqpair=0x1545f40 from 02:09:23.139861 through 02:09:23.146068, then for tqpair=0x7fd158000b90 from 02:09:23.146219 through 02:09:23.159876, all against addr=10.0.0.2, port=4420]
00:33:41.311 [2024-07-26 02:09:23.160062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-07-26 02:09:23.160088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-07-26 02:09:23.160228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-07-26 02:09:23.160254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-07-26 02:09:23.160386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-07-26 02:09:23.160411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-07-26 02:09:23.160585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-07-26 02:09:23.160618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-07-26 02:09:23.160748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-07-26 02:09:23.160781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 
00:33:41.311 [2024-07-26 02:09:23.160894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-07-26 02:09:23.160919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-07-26 02:09:23.161119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-07-26 02:09:23.161145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-07-26 02:09:23.161286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-07-26 02:09:23.161312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-07-26 02:09:23.161513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.311 [2024-07-26 02:09:23.161539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.311 qpair failed and we were unable to recover it. 00:33:41.311 [2024-07-26 02:09:23.161675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.161704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 
00:33:41.312 [2024-07-26 02:09:23.161877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.161905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.162148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.162174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.162281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.162307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.162440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.162468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.162622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.162649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 
00:33:41.312 [2024-07-26 02:09:23.162789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.162816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.162931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.162958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.163081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.163119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.163232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.163258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.163390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.163425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 
00:33:41.312 [2024-07-26 02:09:23.163570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.163602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.163715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.163741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.163904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.163938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.164070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.164104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.164269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.164295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 
00:33:41.312 [2024-07-26 02:09:23.164401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.164427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.164582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.164612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.164763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.164792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.164970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.164999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.165182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.165212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 
00:33:41.312 [2024-07-26 02:09:23.165329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.165356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.165546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.165579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.312 qpair failed and we were unable to recover it. 00:33:41.312 [2024-07-26 02:09:23.165740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.312 [2024-07-26 02:09:23.165766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.165883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.165909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.166072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.166101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 
00:33:41.313 [2024-07-26 02:09:23.166266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.166292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.166414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.166446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.166564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.166590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.166776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.166804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.166924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.166952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 
00:33:41.313 [2024-07-26 02:09:23.167097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.167125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.167248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.167275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.167442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.167472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.167605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.167636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.167775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.167811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 
00:33:41.313 [2024-07-26 02:09:23.167969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.167998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.168118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.168148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.168301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.168330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.168458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.168491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.168633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.168660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 
00:33:41.313 [2024-07-26 02:09:23.168849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.168877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.169022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.169051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.169208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.169235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.169344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.169370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 00:33:41.313 [2024-07-26 02:09:23.169491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.169520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.313 qpair failed and we were unable to recover it. 
00:33:41.313 [2024-07-26 02:09:23.169675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.313 [2024-07-26 02:09:23.169707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.169869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.169902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.170072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.170102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.170261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.170289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.170444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.170474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 
00:33:41.314 [2024-07-26 02:09:23.170651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.170677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.170789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.170814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.170974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.171002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.171146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.171174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.171294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.171320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 
00:33:41.314 [2024-07-26 02:09:23.171463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.171489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.171641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.171670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.171832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.171862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.172008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.172034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.172220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.172252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 
00:33:41.314 [2024-07-26 02:09:23.172380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.172418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.172569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.172598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.172785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.314 [2024-07-26 02:09:23.172811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.314 qpair failed and we were unable to recover it. 00:33:41.314 [2024-07-26 02:09:23.172927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.315 [2024-07-26 02:09:23.172953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.315 qpair failed and we were unable to recover it. 00:33:41.315 [2024-07-26 02:09:23.173088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.315 [2024-07-26 02:09:23.173121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.315 qpair failed and we were unable to recover it. 
00:33:41.315 [2024-07-26 02:09:23.173273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.315 [2024-07-26 02:09:23.173318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.315 qpair failed and we were unable to recover it. 00:33:41.315 [2024-07-26 02:09:23.173453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.315 [2024-07-26 02:09:23.173479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.315 qpair failed and we were unable to recover it. 00:33:41.315 [2024-07-26 02:09:23.173599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.315 [2024-07-26 02:09:23.173625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.315 qpair failed and we were unable to recover it. 00:33:41.315 [2024-07-26 02:09:23.173762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.315 [2024-07-26 02:09:23.173797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.315 qpair failed and we were unable to recover it. 00:33:41.315 [2024-07-26 02:09:23.173916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.315 [2024-07-26 02:09:23.173954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.315 qpair failed and we were unable to recover it. 
00:33:41.319 [2024-07-26 02:09:23.193379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.193406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.193599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.193626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.193732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.193757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.193861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.193886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.193995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.194020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 
00:33:41.320 [2024-07-26 02:09:23.194166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.194192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.194305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.194332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.194472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.194515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.194638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.194666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.194793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.194823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 
00:33:41.320 [2024-07-26 02:09:23.194961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.194987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.195122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.195148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.195278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.195308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.195481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.195515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.195681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.195716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 
00:33:41.320 [2024-07-26 02:09:23.195830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.195856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.195974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.196001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.196114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.196140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.196303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.196329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.196486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.196515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 
00:33:41.320 [2024-07-26 02:09:23.196647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.196676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.196849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.196878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.197004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.197045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.197220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.197247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.197469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.197522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 
00:33:41.320 [2024-07-26 02:09:23.197749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.197779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.197917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.197943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.198083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.198134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.198306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.198341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.198484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.198526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 
00:33:41.320 [2024-07-26 02:09:23.198659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.198686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.198800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.198826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.198987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.199013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.320 qpair failed and we were unable to recover it. 00:33:41.320 [2024-07-26 02:09:23.199211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.320 [2024-07-26 02:09:23.199237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.199346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.199371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 
00:33:41.321 [2024-07-26 02:09:23.199593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.199623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.199756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.199786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.199948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.199973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.200111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.200138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.200266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.200292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 
00:33:41.321 [2024-07-26 02:09:23.200414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.200440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.200585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.200610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.200817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.200851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.201070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.201112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.201239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.201268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 
00:33:41.321 [2024-07-26 02:09:23.201416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.201445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.201592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.201618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.201788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.201814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.201986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.202012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.202181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.202207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 
00:33:41.321 [2024-07-26 02:09:23.202314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.202340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.202489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.202514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.202669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.202698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.202868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.202921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.203064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.203091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 
00:33:41.321 [2024-07-26 02:09:23.203251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.203277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.203404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.203454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.203620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.203646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.203821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.203848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.203964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.203990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 
00:33:41.321 [2024-07-26 02:09:23.204157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.204187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.204353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.204378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.321 [2024-07-26 02:09:23.204496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.321 [2024-07-26 02:09:23.204522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.321 qpair failed and we were unable to recover it. 00:33:41.322 [2024-07-26 02:09:23.204686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.204713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 00:33:41.322 [2024-07-26 02:09:23.204874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.204902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 
00:33:41.322 [2024-07-26 02:09:23.205034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.205071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 00:33:41.322 [2024-07-26 02:09:23.205217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.205243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 00:33:41.322 [2024-07-26 02:09:23.205422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.205451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 00:33:41.322 [2024-07-26 02:09:23.205585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.205631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 00:33:41.322 [2024-07-26 02:09:23.205792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.205827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 
00:33:41.322 [2024-07-26 02:09:23.205984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.206011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 00:33:41.322 [2024-07-26 02:09:23.206148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.206191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 00:33:41.322 [2024-07-26 02:09:23.206306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.206350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 00:33:41.322 [2024-07-26 02:09:23.206519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.206545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 00:33:41.322 [2024-07-26 02:09:23.206676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.206719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 
00:33:41.322 [2024-07-26 02:09:23.206826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.322 [2024-07-26 02:09:23.206861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.322 qpair failed and we were unable to recover it. 
[the above connect() failed / sock connection error / qpair failed sequence for tqpair=0x7fd158000b90 repeats with advancing timestamps from 02:09:23.206998 through 02:09:23.221420] 
00:33:41.325 [2024-07-26 02:09:23.221559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1553ef0 is same with the state(5) to be set 00:33:41.325 [2024-07-26 02:09:23.221734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.325 [2024-07-26 02:09:23.221772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.325 qpair failed and we were unable to recover it. 
[the same failure sequence repeats through 02:09:23.226457, alternating between tqpair=0x1545f40 and tqpair=0x7fd158000b90, all against addr=10.0.0.2, port=4420] 
00:33:41.326 [2024-07-26 02:09:23.226627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.226653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 00:33:41.326 [2024-07-26 02:09:23.226808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.226833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 00:33:41.326 [2024-07-26 02:09:23.226941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.226967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 00:33:41.326 [2024-07-26 02:09:23.227096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.227123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 00:33:41.326 [2024-07-26 02:09:23.227265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.227291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 
00:33:41.326 [2024-07-26 02:09:23.227406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.227449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 00:33:41.326 [2024-07-26 02:09:23.227580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.227606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 00:33:41.326 [2024-07-26 02:09:23.227765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.227791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 00:33:41.326 [2024-07-26 02:09:23.227958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.227986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 00:33:41.326 [2024-07-26 02:09:23.228149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.228175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 
00:33:41.326 [2024-07-26 02:09:23.228298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.228324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 00:33:41.326 [2024-07-26 02:09:23.228435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.326 [2024-07-26 02:09:23.228461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.326 qpair failed and we were unable to recover it. 00:33:41.326 [2024-07-26 02:09:23.228597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.228623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.228787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.228813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.228947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.228972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 
00:33:41.327 [2024-07-26 02:09:23.229087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.229114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.229224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.229250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.229355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.229380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.229483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.229509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.229647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.229673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 
00:33:41.327 [2024-07-26 02:09:23.229836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.229879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.230025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.230054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.230189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.230215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.230355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.230381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.230518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.230560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 
00:33:41.327 [2024-07-26 02:09:23.230694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.230719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.230857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.230883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.231044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.231079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.231185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.231211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.231325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.231350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 
00:33:41.327 [2024-07-26 02:09:23.231459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.231485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.231623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.231649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.231769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.231812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.231966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.231994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.232137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.232163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 
00:33:41.327 [2024-07-26 02:09:23.232295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.232321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.232476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.232504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.232663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.232693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.232877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.232921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.233126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.233155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 
00:33:41.327 [2024-07-26 02:09:23.233266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.233292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.233400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.233428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.233562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.233606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.233719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.233747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 00:33:41.327 [2024-07-26 02:09:23.233927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.233956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.327 qpair failed and we were unable to recover it. 
00:33:41.327 [2024-07-26 02:09:23.234081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.327 [2024-07-26 02:09:23.234124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.234234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.234259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.234366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.234392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.234563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.234589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.234720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.234746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 
00:33:41.328 [2024-07-26 02:09:23.234855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.234882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.235034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.235079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.235203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.235228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.235340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.235366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.235506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.235534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 
00:33:41.328 [2024-07-26 02:09:23.235697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.235723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.235859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.235888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.236065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.236095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.236251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.236278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.236384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.236411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 
00:33:41.328 [2024-07-26 02:09:23.236544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.236573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.236736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.236763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.236899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.236944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.237065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.237094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.237252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.237282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 
00:33:41.328 [2024-07-26 02:09:23.237392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.237418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.237579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.237605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.237781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.237807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.237950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.237978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.238155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.238183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 
00:33:41.328 [2024-07-26 02:09:23.238300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.238327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.238438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.238464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.238627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.238657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.238787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.238814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 00:33:41.328 [2024-07-26 02:09:23.238935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.328 [2024-07-26 02:09:23.238961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.328 qpair failed and we were unable to recover it. 
00:33:41.328 [2024-07-26 02:09:23.239129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.328 [2024-07-26 02:09:23.239157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.328 qpair failed and we were unable to recover it.
00:33:41.328 [2024-07-26 02:09:23.239283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.328 [2024-07-26 02:09:23.239309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.328 qpair failed and we were unable to recover it.
00:33:41.328 [2024-07-26 02:09:23.239419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.328 [2024-07-26 02:09:23.239445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.328 qpair failed and we were unable to recover it.
00:33:41.328 [2024-07-26 02:09:23.239640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.328 [2024-07-26 02:09:23.239690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.328 qpair failed and we were unable to recover it.
00:33:41.328 [2024-07-26 02:09:23.239853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.328 [2024-07-26 02:09:23.239879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.328 qpair failed and we were unable to recover it.
00:33:41.328 [2024-07-26 02:09:23.240013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.328 [2024-07-26 02:09:23.240057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.328 qpair failed and we were unable to recover it.
00:33:41.328 [2024-07-26 02:09:23.240184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.328 [2024-07-26 02:09:23.240210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.328 qpair failed and we were unable to recover it.
00:33:41.328 [2024-07-26 02:09:23.240314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.328 [2024-07-26 02:09:23.240340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.328 qpair failed and we were unable to recover it.
00:33:41.328 [2024-07-26 02:09:23.240445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.328 [2024-07-26 02:09:23.240471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.328 qpair failed and we were unable to recover it.
00:33:41.328 [2024-07-26 02:09:23.240584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.240609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.240747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.240772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.240875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.240916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.241067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.241096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.241246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.241272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.241377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.241403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.241545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.241571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.241683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.241713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.241852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.241878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.242070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.242113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.242277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.242303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.242435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.242467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.242587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.242621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.242791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.242817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.242964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.242994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.243181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.243208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.243317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.243343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.243453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.243480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.243600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.243631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.243790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.243815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.243952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.243996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.244135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.244164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.244318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.244344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.244457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.244483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.244621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.244647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.244788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.244813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.244931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.244958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.245134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.245160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.245298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.245323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.245502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.245531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.245725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.245753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.245889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.245917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.246027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.246053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.246172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.246198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.246314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.246340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.246478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.246503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.246641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.246670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.246814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.246841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.246993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.247038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.247206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.329 [2024-07-26 02:09:23.247231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.329 qpair failed and we were unable to recover it.
00:33:41.329 [2024-07-26 02:09:23.247361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.247386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.247495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.247520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.247676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.247705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.247855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.247883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.247997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.248023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.248137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.248163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.248278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.248304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.248415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.248444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.248604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.248630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.248761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.248786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.248949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.248992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.249128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.249156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.249305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.249331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.249442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.249469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.249611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.249636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.249750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.249778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.249891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.249916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.250075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.250103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.250231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.250260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.250368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.250393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.250563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.250588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.250716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.250742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.250923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.250951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.251128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.251155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.251293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.251320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.251432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.251475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.251633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.251662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.251820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.251847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.252013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.252056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.252184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.252213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.252367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.252393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.252514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.252541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.252745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.252771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.252905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.252931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.253043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.253076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.253220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.253246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.253380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.253405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.253565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.253591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.253747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.253776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.253899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.253925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.330 [2024-07-26 02:09:23.254042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.330 [2024-07-26 02:09:23.254076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.330 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.254192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.254217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.254327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.254353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.254497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.254522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.254657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.254685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.254843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.254869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.254971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.254998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.255136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.255167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.255307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.255332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.255443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.255469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.255579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.255605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.255720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.255746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.255882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.255926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.256079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.256123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.256255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.256280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.256393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.256419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.256553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.256578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.256689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.256716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.256858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.256884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.257047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.257079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.257202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.257228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.257366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.257392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.257502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.257527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.257666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.257691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.257859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.257903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.258069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.331 [2024-07-26 02:09:23.258095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.331 qpair failed and we were unable to recover it.
00:33:41.331 [2024-07-26 02:09:23.258236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.331 [2024-07-26 02:09:23.258263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.331 qpair failed and we were unable to recover it. 00:33:41.331 [2024-07-26 02:09:23.258374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.331 [2024-07-26 02:09:23.258400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.331 qpair failed and we were unable to recover it. 00:33:41.331 [2024-07-26 02:09:23.258537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.331 [2024-07-26 02:09:23.258564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.331 qpair failed and we were unable to recover it. 00:33:41.331 [2024-07-26 02:09:23.258709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.331 [2024-07-26 02:09:23.258736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.331 qpair failed and we were unable to recover it. 00:33:41.331 [2024-07-26 02:09:23.258880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.331 [2024-07-26 02:09:23.258905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.331 qpair failed and we were unable to recover it. 
00:33:41.331 [2024-07-26 02:09:23.259044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.331 [2024-07-26 02:09:23.259075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.331 qpair failed and we were unable to recover it. 00:33:41.331 [2024-07-26 02:09:23.259231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.331 [2024-07-26 02:09:23.259258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.331 qpair failed and we were unable to recover it. 00:33:41.331 [2024-07-26 02:09:23.259381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.331 [2024-07-26 02:09:23.259407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.331 qpair failed and we were unable to recover it. 00:33:41.331 [2024-07-26 02:09:23.259574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.331 [2024-07-26 02:09:23.259617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.331 qpair failed and we were unable to recover it. 00:33:41.331 [2024-07-26 02:09:23.259783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.331 [2024-07-26 02:09:23.259809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.331 qpair failed and we were unable to recover it. 
00:33:41.331 [2024-07-26 02:09:23.259966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.331 [2024-07-26 02:09:23.259998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.331 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.260159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.260184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.260323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.260350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.260475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.260501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.260618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.260644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 
00:33:41.332 [2024-07-26 02:09:23.260794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.260821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.261000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.261028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.261189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.261217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.261348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.261375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.261537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.261563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 
00:33:41.332 [2024-07-26 02:09:23.261690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.261718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.261845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.261877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.261990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.262018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.262153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.262194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.262338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.262364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 
00:33:41.332 [2024-07-26 02:09:23.262518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.262545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.262729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.262755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.262898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.262924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.263075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.263101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.263237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.263281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 
00:33:41.332 [2024-07-26 02:09:23.263440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.263467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.263575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.263601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.263737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.263763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.263897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.263924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.264038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.264067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 
00:33:41.332 [2024-07-26 02:09:23.264244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.264270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.264379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.264405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.264517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.264544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.264677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.264703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.264812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.264838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 
00:33:41.332 [2024-07-26 02:09:23.264949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.264974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.265123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.265151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.265287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.265313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.265467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.265495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.265671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.265701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 
00:33:41.332 [2024-07-26 02:09:23.265830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.265856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.266000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.266028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.266197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.266226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.266369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.266395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.266534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.266561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 
00:33:41.332 [2024-07-26 02:09:23.266724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.266752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.266906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.266932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.267070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.267099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.267234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.332 [2024-07-26 02:09:23.267261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.332 qpair failed and we were unable to recover it. 00:33:41.332 [2024-07-26 02:09:23.267400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.267426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 
00:33:41.333 [2024-07-26 02:09:23.267565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.267592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.267766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.267792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.267908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.267936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.268124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.268151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.268281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.268309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 
00:33:41.333 [2024-07-26 02:09:23.268495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.268523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.268635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.268681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.268834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.268860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.268983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.269008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.269160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.269203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 
00:33:41.333 [2024-07-26 02:09:23.269358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.269391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.269517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.269544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.269659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.269685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.269853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.269896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.270070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.270096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 
00:33:41.333 [2024-07-26 02:09:23.270206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.270233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.270387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.270414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.270544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.270571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.270687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.270712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 00:33:41.333 [2024-07-26 02:09:23.270896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.333 [2024-07-26 02:09:23.270922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.333 qpair failed and we were unable to recover it. 
00:33:41.333 [2024-07-26 02:09:23.271067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.333 [2024-07-26 02:09:23.271095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.333 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure triplet repeated for tqpair=0x7fd148000b90 (addr=10.0.0.2, port=4420) from 02:09:23.271067 through 02:09:23.289442; console timestamps 00:33:41.333 to 00:33:41.618]
00:33:41.618 [2024-07-26 02:09:23.289562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.618 [2024-07-26 02:09:23.289587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.618 qpair failed and we were unable to recover it. 00:33:41.618 [2024-07-26 02:09:23.289718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.618 [2024-07-26 02:09:23.289747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.618 qpair failed and we were unable to recover it. 00:33:41.618 [2024-07-26 02:09:23.289861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.618 [2024-07-26 02:09:23.289889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.618 qpair failed and we were unable to recover it. 00:33:41.618 [2024-07-26 02:09:23.290000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.618 [2024-07-26 02:09:23.290026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.618 qpair failed and we were unable to recover it. 00:33:41.618 [2024-07-26 02:09:23.290181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.290207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 
00:33:41.619 [2024-07-26 02:09:23.290320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.290349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.290514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.290540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.290651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.290678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.290813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.290840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.290980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.291006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 
00:33:41.619 [2024-07-26 02:09:23.291119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.291155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.291296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.291322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.291458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.291485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.291646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.291673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.291836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.291863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 
00:33:41.619 [2024-07-26 02:09:23.291973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.292000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.292111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.292139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.292259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.292285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.292397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.292423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.292560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.292587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 
00:33:41.619 [2024-07-26 02:09:23.292719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.292746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.292879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.292906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.293013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.293039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.293157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.293185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.293305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.293331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 
00:33:41.619 [2024-07-26 02:09:23.293445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.293472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.293625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.293652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.293789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.293817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.293955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.293981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.294117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.294144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 
00:33:41.619 [2024-07-26 02:09:23.294252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.294279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.294415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.294441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.294561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.294588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.294695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.294720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.294860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.294887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 
00:33:41.619 [2024-07-26 02:09:23.295023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.295049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.295194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.295221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.295359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.295386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.295502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.295529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.295662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.295703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 
00:33:41.619 [2024-07-26 02:09:23.295851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.295880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.619 [2024-07-26 02:09:23.296043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.619 [2024-07-26 02:09:23.296089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.619 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.296252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.296278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.296423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.296450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.296610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.296637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 
00:33:41.620 [2024-07-26 02:09:23.296756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.296786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.296950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.296976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.297113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.297141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.297283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.297310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.297417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.297443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 
00:33:41.620 [2024-07-26 02:09:23.297580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.297606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.297747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.297774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.297940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.297967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.298132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.298159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.298272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.298301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 
00:33:41.620 [2024-07-26 02:09:23.298409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.298435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.298547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.298574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.298713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.298739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.298877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.298909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.299031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.299057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 
00:33:41.620 [2024-07-26 02:09:23.299177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.299203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.299344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.299370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.299484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.299511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.299623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.299649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.299815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.299841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 
00:33:41.620 [2024-07-26 02:09:23.299951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.299978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.300119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.300146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.300250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.300277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.300446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.300473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.300581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.300608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 
00:33:41.620 [2024-07-26 02:09:23.300722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.300748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.300884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.300909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.301024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.301052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.301198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.301225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.301395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.301424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 
00:33:41.620 [2024-07-26 02:09:23.301537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.301563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.301669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.620 [2024-07-26 02:09:23.301695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.620 qpair failed and we were unable to recover it. 00:33:41.620 [2024-07-26 02:09:23.301832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.621 [2024-07-26 02:09:23.301858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.621 qpair failed and we were unable to recover it. 00:33:41.621 [2024-07-26 02:09:23.301991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.621 [2024-07-26 02:09:23.302017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.621 qpair failed and we were unable to recover it. 00:33:41.621 [2024-07-26 02:09:23.302134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.621 [2024-07-26 02:09:23.302161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.621 qpair failed and we were unable to recover it. 
00:33:41.621 [2024-07-26 02:09:23.302296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.621 [2024-07-26 02:09:23.302322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.621 qpair failed and we were unable to recover it. 00:33:41.621 [2024-07-26 02:09:23.302432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.621 [2024-07-26 02:09:23.302458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.621 qpair failed and we were unable to recover it. 00:33:41.621 [2024-07-26 02:09:23.302562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.621 [2024-07-26 02:09:23.302590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.621 qpair failed and we were unable to recover it. 00:33:41.621 [2024-07-26 02:09:23.302707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.621 [2024-07-26 02:09:23.302733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.621 qpair failed and we were unable to recover it. 00:33:41.621 [2024-07-26 02:09:23.302862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.621 [2024-07-26 02:09:23.302889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.621 qpair failed and we were unable to recover it. 
00:33:41.621 [2024-07-26 02:09:23.303004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.303029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.303144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.303171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.303312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.303339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.303446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.303472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.303580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.303606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.303768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.303793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.303961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.303987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.304109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.304135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.304251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.304277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.304418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.304444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.304554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.304580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.304744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.304770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.304917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.304945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.305088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.305121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.305236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.305262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.305396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.305423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.305567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.305592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.305700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.305726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.305890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.305916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.306025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.306050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.306224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.306250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.306391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.306417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.306526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.306552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.306717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.306743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.306876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.306901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.307012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.307039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.307182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.307207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.307334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.621 [2024-07-26 02:09:23.307361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.621 qpair failed and we were unable to recover it.
00:33:41.621 [2024-07-26 02:09:23.307521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.307547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.307654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.307681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.307790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.307815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.307921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.307947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.308054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.308093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.308231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.308259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.308396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.308422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.308563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.308588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.308726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.308755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.308878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.308903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.309046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.309078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.309192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.309218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.309390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.309416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.309556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.309581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.309715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.309740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.309846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.309871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.310005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.310031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.310142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.310168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.310303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.310329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.310446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.310472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.310608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.310634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.310757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.310783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.310945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.310971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.311084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.311109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.311223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.311249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.311358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.311390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.311557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.311585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.311731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.311756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.311862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.311888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.312017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.312043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.312165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.312192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.312300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.312326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.312531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.312558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.312688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.312714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.312850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.312876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.313008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.313034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.313145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.313171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.313311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.313337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.622 [2024-07-26 02:09:23.313450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.622 [2024-07-26 02:09:23.313476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.622 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.313648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.313675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.313783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.313809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.313914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.313939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.314106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.314146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.314294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.314322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.314454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.314480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.314588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.314614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.314749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.314776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.314911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.314937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.315038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.315070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.315212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.315239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.315379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.315405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.315530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.315556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.315670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.315696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.315804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.315831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.315939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.315966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.316106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.316135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.316248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.316275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.316430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.316456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.316598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.316624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.316758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.316784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.316925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.316951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.317083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.317122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.317240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.317267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.317401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.317427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.317570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.317597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.317704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.317735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.317849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.623 [2024-07-26 02:09:23.317875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.623 qpair failed and we were unable to recover it.
00:33:41.623 [2024-07-26 02:09:23.317991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.318017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 00:33:41.623 [2024-07-26 02:09:23.318132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.318159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 00:33:41.623 [2024-07-26 02:09:23.318278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.318306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 00:33:41.623 [2024-07-26 02:09:23.318444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.318470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 00:33:41.623 [2024-07-26 02:09:23.318576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.318601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 
00:33:41.623 [2024-07-26 02:09:23.318739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.318766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 00:33:41.623 [2024-07-26 02:09:23.318878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.318904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 00:33:41.623 [2024-07-26 02:09:23.319037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.319068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 00:33:41.623 [2024-07-26 02:09:23.319179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.319206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 00:33:41.623 [2024-07-26 02:09:23.319331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.319370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 
00:33:41.623 [2024-07-26 02:09:23.319496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.319523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 00:33:41.623 [2024-07-26 02:09:23.319690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.623 [2024-07-26 02:09:23.319716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.623 qpair failed and we were unable to recover it. 00:33:41.623 [2024-07-26 02:09:23.319830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.319857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.319968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.319996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.320108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.320135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 
00:33:41.624 [2024-07-26 02:09:23.320247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.320273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.320414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.320441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.320585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.320612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.320753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.320780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.320883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.320909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 
00:33:41.624 [2024-07-26 02:09:23.321024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.321050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.321173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.321198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.321346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.321371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.321475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.321500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.321637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.321665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 
00:33:41.624 [2024-07-26 02:09:23.321770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.321800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.321917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.321943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.322077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.322103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.322215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.322242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.322377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.322403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 
00:33:41.624 [2024-07-26 02:09:23.322540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.322567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.322706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.322732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.322843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.322873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.323023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.323051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.323196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.323223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 
00:33:41.624 [2024-07-26 02:09:23.323324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.323350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.323475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.323500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.323611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.323637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.323748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.323777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.323925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.323952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 
00:33:41.624 [2024-07-26 02:09:23.324084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.324111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.324246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.324272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.324418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.324445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.324585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.324611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.324748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.324775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 
00:33:41.624 [2024-07-26 02:09:23.324881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.324907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.325076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.325102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.325204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.325230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.325362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.325388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.325495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.325521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 
00:33:41.624 [2024-07-26 02:09:23.325626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.325653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.325764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.325791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.624 [2024-07-26 02:09:23.325957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.624 [2024-07-26 02:09:23.325983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.624 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.326098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.326125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.326234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.326261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 
00:33:41.625 [2024-07-26 02:09:23.326394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.326420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.326559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.326586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.326700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.326726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.326887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.326913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.327025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.327052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 
00:33:41.625 [2024-07-26 02:09:23.327208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.327233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.327368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.327394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.327501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.327527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.327696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.327722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.327855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.327881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 
00:33:41.625 [2024-07-26 02:09:23.327989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.328016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.328195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.328234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.328354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.328382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.328524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.328551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.328687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.328714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 
00:33:41.625 [2024-07-26 02:09:23.328850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.328877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.328990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.329017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.329130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.329157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.329270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.329296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.329427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.329453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 
00:33:41.625 [2024-07-26 02:09:23.329564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.329590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.329699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.329725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.329893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.329918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.330029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.330056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.330177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.330203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 
00:33:41.625 [2024-07-26 02:09:23.330317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.330342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.330480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.330505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.330620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.330645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.330783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.330809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 00:33:41.625 [2024-07-26 02:09:23.330944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.625 [2024-07-26 02:09:23.330969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.625 qpair failed and we were unable to recover it. 
00:33:41.625 [2024-07-26 02:09:23.331084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.625 [2024-07-26 02:09:23.331111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.625 qpair failed and we were unable to recover it.
00:33:41.625 [2024-07-26 02:09:23.331222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.625 [2024-07-26 02:09:23.331248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.625 qpair failed and we were unable to recover it.
00:33:41.625 [2024-07-26 02:09:23.331357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.625 [2024-07-26 02:09:23.331383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.625 qpair failed and we were unable to recover it.
00:33:41.625 [2024-07-26 02:09:23.331491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.625 [2024-07-26 02:09:23.331517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.625 qpair failed and we were unable to recover it.
00:33:41.625 [2024-07-26 02:09:23.331624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.625 [2024-07-26 02:09:23.331649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.625 qpair failed and we were unable to recover it.
00:33:41.625 [2024-07-26 02:09:23.331789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.331814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.331965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.331991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.332092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.332118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.332248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.332286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.332429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.332457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.332594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.332621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.332761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.332788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.332911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.332938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.333047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.333080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.333183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.333209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.333318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.333344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.333461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.333487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.333632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.333658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.333801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.333827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.333967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.333993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.334098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.334124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.334233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.334259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.334398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.334424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.334533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.334561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.334669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.334696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.334811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.334841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.335004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.335030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.335161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.335188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.335326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.335352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.335482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.335508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.335694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.335723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.335890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.335917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.336024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.336050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.336185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.336228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.336411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.336460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.336590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.336635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.336761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.336805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.336972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.336999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.337177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.337205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.337372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.337399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.337538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.337579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.337727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.337757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.337934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.337963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.338157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.338187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.338334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.338364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.338514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.338544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.626 [2024-07-26 02:09:23.338691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.626 [2024-07-26 02:09:23.338721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.626 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.338845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.338875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.339027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.339056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.339219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.339246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.339396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.339425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.339549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.339578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.339765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.339791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.339928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.339954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.340067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.340094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.340258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.340284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.340410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.340440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.340588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.340617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.340751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.340782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.340956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.340985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.341112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.341139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.341281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.341308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.341416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.341459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.341622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.341649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.341866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.341895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.342047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.342078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.342209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.342236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.342369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.342396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.342578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.342607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.342779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.342808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.342952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.342978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.343141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.343168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.343282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.343308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.343446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.343473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.343658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.343691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.343829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.343871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.344027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.344054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.344196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.344223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.344327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.344371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.344597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.344624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.344787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.344816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.344928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.344956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.345108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.345135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.345290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.345320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.345511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.345538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.345679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.345706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.345892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.345920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.346090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.627 [2024-07-26 02:09:23.346132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.627 qpair failed and we were unable to recover it.
00:33:41.627 [2024-07-26 02:09:23.346242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.346286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.346414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.346443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.346624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.346654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.346808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.346837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.346974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.346999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.347107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.347135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.347290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.347329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.347438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.347466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.347579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.347607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.347746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.347772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.347935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.347962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.348073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.348101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.348248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.348274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.348415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.348458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.348614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.348644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.348796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.348827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.348989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.349015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.349137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.349162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.349272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.349317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.349465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.349494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.349653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.349679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.349827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.349854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.349978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.350005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.350146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.350172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.350308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.350334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.350494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.350523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.350651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.628 [2024-07-26 02:09:23.350685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.628 qpair failed and we were unable to recover it.
00:33:41.628 [2024-07-26 02:09:23.350806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.350834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.350990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.351017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.351134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.351160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.351267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.351293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.351444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.351471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 
00:33:41.628 [2024-07-26 02:09:23.351612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.351641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.351795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.351824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.352018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.352069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.352185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.352214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.352348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.352377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 
00:33:41.628 [2024-07-26 02:09:23.352566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.352593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.352734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.352760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.352870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.352895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.353007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.353035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 00:33:41.628 [2024-07-26 02:09:23.353223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.628 [2024-07-26 02:09:23.353253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.628 qpair failed and we were unable to recover it. 
00:33:41.628 [2024-07-26 02:09:23.353368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.353396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.353548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.353578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.353732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.353761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.353922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.353948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.354078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.354106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 
00:33:41.629 [2024-07-26 02:09:23.354267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.354312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.354480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.354506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.354679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.354704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.354842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.354868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.355028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.355054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 
00:33:41.629 [2024-07-26 02:09:23.355236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.355262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.355387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.355416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.355544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.355571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.355709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.355739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.355902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.355931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 
00:33:41.629 [2024-07-26 02:09:23.356087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.356131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.356268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.356297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.356450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.356480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.356633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.356661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.356824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.356850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 
00:33:41.629 [2024-07-26 02:09:23.356960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.356987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.357161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.357200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.357382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.357409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.357548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.357574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.357688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.357719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 
00:33:41.629 [2024-07-26 02:09:23.357874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.357903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.358081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.358123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.358235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.358261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.358396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.358422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.358559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.358601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 
00:33:41.629 [2024-07-26 02:09:23.358796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.358850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.359026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.359052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.359171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.359197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.359342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.359370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.359508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.359534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 
00:33:41.629 [2024-07-26 02:09:23.359696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.359725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.359926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.359954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.360119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.360146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.360263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.360290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 00:33:41.629 [2024-07-26 02:09:23.360427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.629 [2024-07-26 02:09:23.360455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.629 qpair failed and we were unable to recover it. 
00:33:41.629 [2024-07-26 02:09:23.360648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.360674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.360812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.360839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.361001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.361031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.361228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.361255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.361382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.361411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 
00:33:41.630 [2024-07-26 02:09:23.361533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.361562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.361740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.361769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.361922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.361950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.362084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.362110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.362250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.362276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 
00:33:41.630 [2024-07-26 02:09:23.362458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.362486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.362638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.362671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.362851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.362881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.363005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.363033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.363243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.363282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 
00:33:41.630 [2024-07-26 02:09:23.363395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.363422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.363533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.363558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.363670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.363696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.363873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.363924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 00:33:41.630 [2024-07-26 02:09:23.364077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-26 02:09:23.364119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.630 qpair failed and we were unable to recover it. 
00:33:41.630 [2024-07-26 02:09:23.364279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.630 [2024-07-26 02:09:23.364305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.630 qpair failed and we were unable to recover it.
00:33:41.630 [2024-07-26 02:09:23.364412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.630 [2024-07-26 02:09:23.364438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.630 qpair failed and we were unable to recover it.
00:33:41.630 [2024-07-26 02:09:23.364620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.630 [2024-07-26 02:09:23.364648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.630 qpair failed and we were unable to recover it.
00:33:41.630 [2024-07-26 02:09:23.364796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.630 [2024-07-26 02:09:23.364827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.630 qpair failed and we were unable to recover it.
00:33:41.632 (the same three-line error repeats from [2024-07-26 02:09:23.365008] through [2024-07-26 02:09:23.383930], alternating between tqpair=0x1545f40 and tqpair=0x7fd148000b90, every attempt against addr=10.0.0.2, port=4420)
00:33:41.632 [2024-07-26 02:09:23.384070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.632 [2024-07-26 02:09:23.384098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.632 qpair failed and we were unable to recover it. 00:33:41.632 [2024-07-26 02:09:23.384237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.632 [2024-07-26 02:09:23.384262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.632 qpair failed and we were unable to recover it. 00:33:41.632 [2024-07-26 02:09:23.384430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.632 [2024-07-26 02:09:23.384474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.632 qpair failed and we were unable to recover it. 00:33:41.632 [2024-07-26 02:09:23.384616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.632 [2024-07-26 02:09:23.384643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.632 qpair failed and we were unable to recover it. 00:33:41.632 [2024-07-26 02:09:23.384783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.632 [2024-07-26 02:09:23.384808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.632 qpair failed and we were unable to recover it. 
00:33:41.632 [2024-07-26 02:09:23.384954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.632 [2024-07-26 02:09:23.384980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.632 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.385120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.385146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.385256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.385298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.385436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.385463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.385640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.385665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 
00:33:41.633 [2024-07-26 02:09:23.385815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.385841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.386003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.386039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.386201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.386228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.386364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.386406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.386553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.386581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 
00:33:41.633 [2024-07-26 02:09:23.386740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.386766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.386944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.386973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.387123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.387163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.387298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.387325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.387434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.387460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 
00:33:41.633 [2024-07-26 02:09:23.387581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.387607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.387758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.387783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.387897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.387924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.388118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.388157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.388325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.388353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 
00:33:41.633 [2024-07-26 02:09:23.388515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.388543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.388715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.388742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.388881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.388908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.389047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.389083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.389272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.389299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 
00:33:41.633 [2024-07-26 02:09:23.389436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.389462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.389565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.389590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.389743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.389769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.389947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.389973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.390152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.390179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 
00:33:41.633 [2024-07-26 02:09:23.390336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.390361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.390499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.390524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.390633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.390659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.390791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.390825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.390958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.390984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 
00:33:41.633 [2024-07-26 02:09:23.391099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.391125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.391264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.391290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.391395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.391421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.391570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.391615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.391769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.391796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 
00:33:41.633 [2024-07-26 02:09:23.391934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.391961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.392072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.392114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.633 qpair failed and we were unable to recover it. 00:33:41.633 [2024-07-26 02:09:23.392234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.633 [2024-07-26 02:09:23.392263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.392443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.392468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.392584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.392609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 
00:33:41.634 [2024-07-26 02:09:23.392768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.392794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.392942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.392971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.393139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.393165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.393328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.393373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.393527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.393554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 
00:33:41.634 [2024-07-26 02:09:23.393692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.393718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.393875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.393901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.394012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.394038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.394187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.394225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.394362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.394392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 
00:33:41.634 [2024-07-26 02:09:23.394522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.394548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.394659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.394684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.394823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.394865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.394970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.394996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.395132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.395183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 
00:33:41.634 [2024-07-26 02:09:23.395336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.395375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.395567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.395594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.395751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.395780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.395966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.395993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.396146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.396172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 
00:33:41.634 [2024-07-26 02:09:23.396352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.396380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.396578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.396605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.396739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.396765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.396877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.396902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.397076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.397106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 
00:33:41.634 [2024-07-26 02:09:23.397261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.397287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.397422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.397464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.397605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.397657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.397791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.397816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.397952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.397978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 
00:33:41.634 [2024-07-26 02:09:23.398137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.398192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.398360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.398387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.398516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.398545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.398719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.398745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.398888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.398915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 
00:33:41.634 [2024-07-26 02:09:23.399076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.399119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.399233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.399260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.399424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.399449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.399554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.399598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.399748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.399777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 
00:33:41.634 [2024-07-26 02:09:23.399966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.399992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.400177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.400206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.400391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.400418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.400578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.400604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.634 [2024-07-26 02:09:23.400709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.400735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 
00:33:41.634 [2024-07-26 02:09:23.400832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.634 [2024-07-26 02:09:23.400858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.634 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.400988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.401014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.401125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.401152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.401291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.401318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.401477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.401503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 
00:33:41.635 [2024-07-26 02:09:23.401632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.401658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.401790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.401815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.401969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.402009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.402133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.402161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.402289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.402315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 
00:33:41.635 [2024-07-26 02:09:23.402476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.402508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.402616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.402642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.402801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.402827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.402930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.402957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.403097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.403124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 
00:33:41.635 [2024-07-26 02:09:23.403285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.403311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.403466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.403509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.403702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.403731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.403861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.403887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.404023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.404049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 
00:33:41.635 [2024-07-26 02:09:23.404187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.404216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.404417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.404459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.404624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.404695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.404806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.404831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.404976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.405001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 
00:33:41.635 [2024-07-26 02:09:23.405125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.405154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.405298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.405328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.405560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.405603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.405740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.405765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.405901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.405927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 
00:33:41.635 [2024-07-26 02:09:23.406069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.406094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.406250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.406293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.406419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.406471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.406606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.406633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.406762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.406787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 
00:33:41.635 [2024-07-26 02:09:23.406920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.406946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.407095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.407121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.407277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.407316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.407484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.407510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.407654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.407681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 
00:33:41.635 [2024-07-26 02:09:23.407823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.407849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.407989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.408015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.408169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.408212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.408348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.408390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.408572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.408621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 
00:33:41.635 [2024-07-26 02:09:23.408763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.408815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.408964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.408990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.409106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.409132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.409268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.409294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 00:33:41.635 [2024-07-26 02:09:23.409421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.635 [2024-07-26 02:09:23.409449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.635 qpair failed and we were unable to recover it. 
00:33:41.635 [2024-07-26 02:09:23.409616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.409644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.409767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.409796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.409943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.409972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.410086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.410128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.410293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.410319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 
00:33:41.636 [2024-07-26 02:09:23.410580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.410630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.410806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.410835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.410983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.411012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.411173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.411213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.411379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.411423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 
00:33:41.636 [2024-07-26 02:09:23.411624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.411650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.411845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.411887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.412028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.412054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.412207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.412233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.412396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.412450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 
00:33:41.636 [2024-07-26 02:09:23.412621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.412649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.412786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.412811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.412945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.412971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.413087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.413113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.413276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.413302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 
00:33:41.636 [2024-07-26 02:09:23.413416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.413442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.413554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.413580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.413721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.413747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.413891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.413916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.414025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.414050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 
00:33:41.636 [2024-07-26 02:09:23.414164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.414190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.414307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.414346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.414485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.414531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.414711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.414738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.414864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.414889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 
00:33:41.636 [2024-07-26 02:09:23.415030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.415056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.415172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.415198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.415327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.415355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.415501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.415526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 00:33:41.636 [2024-07-26 02:09:23.415691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.636 [2024-07-26 02:09:23.415717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.636 qpair failed and we were unable to recover it. 
00:33:41.636 [2024-07-26 02:09:23.415828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.415853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.636 qpair failed and we were unable to recover it.
00:33:41.636 [2024-07-26 02:09:23.415958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.415984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.636 qpair failed and we were unable to recover it.
00:33:41.636 [2024-07-26 02:09:23.416135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.416180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.636 qpair failed and we were unable to recover it.
00:33:41.636 [2024-07-26 02:09:23.416335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.416378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.636 qpair failed and we were unable to recover it.
00:33:41.636 [2024-07-26 02:09:23.416532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.416561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.636 qpair failed and we were unable to recover it.
00:33:41.636 [2024-07-26 02:09:23.416717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.416744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.636 qpair failed and we were unable to recover it.
00:33:41.636 [2024-07-26 02:09:23.416895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.416921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.636 qpair failed and we were unable to recover it.
00:33:41.636 [2024-07-26 02:09:23.417085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.417112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.636 qpair failed and we were unable to recover it.
00:33:41.636 [2024-07-26 02:09:23.417237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.417281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.636 qpair failed and we were unable to recover it.
00:33:41.636 [2024-07-26 02:09:23.417399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.417426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.636 qpair failed and we were unable to recover it.
00:33:41.636 [2024-07-26 02:09:23.417537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.417564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.636 qpair failed and we were unable to recover it.
00:33:41.636 [2024-07-26 02:09:23.417676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.636 [2024-07-26 02:09:23.417702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.417813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.417840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.417961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.417987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.418129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.418155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.418277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.418315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.418460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.418487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.418596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.418622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.418759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.418785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.418894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.418924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.419037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.419070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.419202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.419229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.419339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.419365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.419474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.419500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.419612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.419638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.419757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.419783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.419920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.419946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.420081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.420109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.420264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.420309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.420495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.420539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.420668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.420697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.420831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.420857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.420968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.420995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.421133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.421163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.421300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.421344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.421516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.421544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.421660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.421689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.421812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.421842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.421965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.421994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.422198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.422227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.422371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.422414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.422606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.422650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.422785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.422811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.422947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.422973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.423131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.423174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.423353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.423380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.423570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.423604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.423732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.423758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.423875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.423902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.424016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.424042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.424210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.424236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.424363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.424391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.424504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.424532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.424673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.424700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.424838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.424864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.424981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.425007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.425129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.425156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.425321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.425347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.425470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.425496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.425607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.425634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.425776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.425819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.425928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.425954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.426063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.426090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.426204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.426231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.637 qpair failed and we were unable to recover it.
00:33:41.637 [2024-07-26 02:09:23.426415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.637 [2024-07-26 02:09:23.426459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.426613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.426641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.426836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.426863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.426976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.427001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.427119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.427145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.427279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.427306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.427407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.427434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.427543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.427569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.427684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.427710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.427854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.427887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.427998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.428024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.428166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.428193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.428371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.428415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.428569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.428612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.428769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.428814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.428926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.428953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.429090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.429117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.429265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.429309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.429497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.429540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.429667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.429712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.429820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.429847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.429956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.429982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.430097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.430123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.430248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.430275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.430413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.430440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.430549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.430576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.430685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.430711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.430821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.430848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.430989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.431016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.431171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.431196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.431330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.638 [2024-07-26 02:09:23.431356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.638 qpair failed and we were unable to recover it.
00:33:41.638 [2024-07-26 02:09:23.431483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.431510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.431707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.431733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.431845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.431871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.432012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.432038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.432172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.432198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 
00:33:41.638 [2024-07-26 02:09:23.432338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.432368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.432494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.432521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.432643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.432686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.432827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.432853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.432990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.433016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 
00:33:41.638 [2024-07-26 02:09:23.433183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.433209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.433319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.433361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.433461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.433488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.433627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.433653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.433786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.433813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 
00:33:41.638 [2024-07-26 02:09:23.433932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.433958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.434112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.434151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.434272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.434302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.434436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.434481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.434672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.434716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 
00:33:41.638 [2024-07-26 02:09:23.434868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.434910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.435024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.435081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.638 qpair failed and we were unable to recover it. 00:33:41.638 [2024-07-26 02:09:23.435214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.638 [2024-07-26 02:09:23.435245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.435363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.435392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.435542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.435571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 
00:33:41.639 [2024-07-26 02:09:23.435718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.435746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.435869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.435897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.436023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.436051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.436198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.436226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.436335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.436363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 
00:33:41.639 [2024-07-26 02:09:23.436528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.436555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.436697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.436723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.436860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.436891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.437006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.437033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.437194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.437222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 
00:33:41.639 [2024-07-26 02:09:23.437349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.437394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.437555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.437599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.437762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.437791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.437950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.437976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.438135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.438185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 
00:33:41.639 [2024-07-26 02:09:23.438314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.438359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.438523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.438552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.438709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.438735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.438852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.438880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.438995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.439021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 
00:33:41.639 [2024-07-26 02:09:23.439203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.439232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.439357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.439387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.439514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.439543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.439656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.439685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.439823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.439848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 
00:33:41.639 [2024-07-26 02:09:23.439950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.439975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.440087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.440113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.440275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.440303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.440522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.440551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.440671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.440700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 
00:33:41.639 [2024-07-26 02:09:23.440809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.440838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.440958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.440986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.441121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.441147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.441322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.441348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.441456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.441487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 
00:33:41.639 [2024-07-26 02:09:23.441600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.441627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.441735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.441761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.441902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.441928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.442051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.442091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.442238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.442265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 
00:33:41.639 [2024-07-26 02:09:23.442400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.442427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.442553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.442579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.442693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.442721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.442862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.442889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.443025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.443051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 
00:33:41.639 [2024-07-26 02:09:23.443216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.443242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.443343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.443384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.443514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.443539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.443702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.443728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 00:33:41.639 [2024-07-26 02:09:23.443832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.639 [2024-07-26 02:09:23.443859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.639 qpair failed and we were unable to recover it. 
00:33:41.640 [2024-07-26 02:09:23.443963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.443990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.444144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.444170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.444292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.444317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.444482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.444525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.444664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.444690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 
00:33:41.640 [2024-07-26 02:09:23.444828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.444854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.444986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.445012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.445148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.445174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.445279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.445304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.445409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.445451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 
00:33:41.640 [2024-07-26 02:09:23.445611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.445637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.445762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.445793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.445962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.445988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.446113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.446140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.446250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.446275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 
00:33:41.640 [2024-07-26 02:09:23.446395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.446421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.446625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.446651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.446763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.446789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.446945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.446971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.447091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.447116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 
00:33:41.640 [2024-07-26 02:09:23.447248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.447274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.447430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.447457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.447586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.447612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.447722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.447749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.447883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.447910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 
00:33:41.640 [2024-07-26 02:09:23.448032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.448057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.448170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.448195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.448309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.448351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.448468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.448509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.448620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.448646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 
00:33:41.640 [2024-07-26 02:09:23.448786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.448812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.448948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.448987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.449116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.449155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.449303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.449331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.449462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.449491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 
00:33:41.640 [2024-07-26 02:09:23.449667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.449697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.449823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.449856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.449984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.450014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.450187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.450218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.450374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.450404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 
00:33:41.640 [2024-07-26 02:09:23.450526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.450555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.450688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.450734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.450873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.450902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.451051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.451087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.451270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.451296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 
00:33:41.640 [2024-07-26 02:09:23.451416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.451445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.451563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.451591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.451707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.451736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.451879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.451907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 00:33:41.640 [2024-07-26 02:09:23.452046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.452076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.640 qpair failed and we were unable to recover it. 
00:33:41.640 [2024-07-26 02:09:23.452212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.640 [2024-07-26 02:09:23.452238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.452347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.452388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.452548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.452577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.452702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.452746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.452869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.452897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 
00:33:41.641 [2024-07-26 02:09:23.453012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.453041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.453215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.453254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.453388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.453434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.453591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.453634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.453753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.453797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 
00:33:41.641 [2024-07-26 02:09:23.453902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.453928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.454034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.454070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.454209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.454236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.454371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.454398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.454531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.454557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 
00:33:41.641 [2024-07-26 02:09:23.454671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.454703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.454849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.454875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.455016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.455041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.455186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.455212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.455342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.455368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 
00:33:41.641 [2024-07-26 02:09:23.455525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.455551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.455682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.455710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.455864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.455892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.456054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.456104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.456245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.456272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 
00:33:41.641 [2024-07-26 02:09:23.456384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.456426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.456585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.456613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.456760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.456789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.456915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.456944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.457111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.457139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 
00:33:41.641 [2024-07-26 02:09:23.457256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.457282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.457456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.457483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.457598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.457626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.457749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.457776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.457892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.457918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 
00:33:41.641 [2024-07-26 02:09:23.458025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.458052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.458230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.458256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.458392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.458418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.458568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.458595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.458711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.458737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 
00:33:41.641 [2024-07-26 02:09:23.458906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.458933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.459071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.459097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.459233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.459263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.459369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.459395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.459548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.459574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 
00:33:41.641 [2024-07-26 02:09:23.459718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.459747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.459898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.459926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.460056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.460120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.460275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.460304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.460434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.460463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 
00:33:41.641 [2024-07-26 02:09:23.460609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.460640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.460849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.460881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.461009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.461039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.461233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.461259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.641 [2024-07-26 02:09:23.461410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.461439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 
00:33:41.641 [2024-07-26 02:09:23.461589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.641 [2024-07-26 02:09:23.461618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.641 qpair failed and we were unable to recover it. 00:33:41.642 [2024-07-26 02:09:23.461770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.642 [2024-07-26 02:09:23.461799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.642 qpair failed and we were unable to recover it. 00:33:41.642 [2024-07-26 02:09:23.461930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.642 [2024-07-26 02:09:23.461969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.642 qpair failed and we were unable to recover it. 00:33:41.642 [2024-07-26 02:09:23.462111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.642 [2024-07-26 02:09:23.462139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.642 qpair failed and we were unable to recover it. 00:33:41.642 [2024-07-26 02:09:23.462275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.642 [2024-07-26 02:09:23.462320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.642 qpair failed and we were unable to recover it. 
00:33:41.642 [2024-07-26 02:09:23.462433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.642 [2024-07-26 02:09:23.462459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.642 qpair failed and we were unable to recover it. 00:33:41.642 [2024-07-26 02:09:23.462620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.642 [2024-07-26 02:09:23.462646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.642 qpair failed and we were unable to recover it. 00:33:41.642 [2024-07-26 02:09:23.462779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.642 [2024-07-26 02:09:23.462824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.642 qpair failed and we were unable to recover it. 00:33:41.642 [2024-07-26 02:09:23.462965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.642 [2024-07-26 02:09:23.462992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.642 qpair failed and we were unable to recover it. 00:33:41.642 [2024-07-26 02:09:23.463111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.642 [2024-07-26 02:09:23.463137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.642 qpair failed and we were unable to recover it. 
00:33:41.642 [2024-07-26 02:09:23.463265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.463290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.463490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.463535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.463709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.463737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.463856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.463885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.464072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.464105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.464247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.464273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.464427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.464471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.464623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.464668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.464820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.464863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.464998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.465024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.465168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.465197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.465349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.465378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.465570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.465618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.465848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.465895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.466050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.466085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.466219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.466244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.466416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.466460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.466582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.466612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.466767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.466796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.466923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.466949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.467080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.467106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.467254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.467293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.467449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.467495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.467647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.467678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.467796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.467825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.468010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.468037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.468176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.468202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.468347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.468376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.468533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.468563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.468703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.468746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.468902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.468932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.469100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.469130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.469271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.469297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.469430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.469472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.469623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.469652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.469827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.469857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.469977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.470006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.470141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.470168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.470303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.470330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.470494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.470537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.470749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.470778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.470924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.470953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.471118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.471144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.471252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.471278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.471412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.471437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.471611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.471640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.642 qpair failed and we were unable to recover it.
00:33:41.642 [2024-07-26 02:09:23.471848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.642 [2024-07-26 02:09:23.471877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.472008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.472039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.472205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.472232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.472371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.472398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.472575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.472605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.472768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.472813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.472948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.472991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.473140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.473166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.473283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.473321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.473471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.473498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.473663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.473705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.473850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.473879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.474045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.474078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.474245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.474271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.474429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.474458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.474618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.474644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.474863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.474904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.475055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.475089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.475218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.475244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.475425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.475454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.475610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.475635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.475776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.475801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.475985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.476014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.476156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.476182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.476296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.476321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.476458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.476488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.476600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.476626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.476758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.476783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.476935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.476974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.477103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.477141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.477252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.477279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.477398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.477425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.477580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.477624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.477776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.477802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.477916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.477942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.478072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.478101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.478233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.478259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.478400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.478425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.478561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.478587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.478722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.478748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.478884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.478912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.479067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.479106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.479251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.479279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.479417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.479444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.479581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.479608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.479742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.479769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.479892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.479935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.480136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.480163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.480302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.643 [2024-07-26 02:09:23.480328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.643 qpair failed and we were unable to recover it.
00:33:41.643 [2024-07-26 02:09:23.480443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.643 [2024-07-26 02:09:23.480469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.643 qpair failed and we were unable to recover it. 00:33:41.643 [2024-07-26 02:09:23.480605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.643 [2024-07-26 02:09:23.480631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.643 qpair failed and we were unable to recover it. 00:33:41.643 [2024-07-26 02:09:23.480763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.643 [2024-07-26 02:09:23.480789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.480941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.480975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.481107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.481133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.481244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.481270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.481378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.481403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.481515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.481542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.481679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.481705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.481876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.481919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.482106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.482138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.482274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.482302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.482437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.482464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.482574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.482601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.482735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.482762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.482862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.482888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.483025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.483052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.483170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.483197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.483362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.483389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.483526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.483555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.483689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.483716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.483892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.483920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.484043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.484078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.484230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.484257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.484393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.484419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.484597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.484623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.484760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.484785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.484922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.484951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.485090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.485119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.485294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.485320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.485461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.485504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.485665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.485714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.485880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.485905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.486015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.486041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.486194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.486233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.486372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.486399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.486531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.486556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.486691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.486716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.486856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.486882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.487006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.487037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.487175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.487202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.487335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.487361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.487496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.487521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.487700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.487730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.487863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.487888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.488009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.488072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.488214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.488241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.488356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.488382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.488547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.488573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.488676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.488702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.488805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.488831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.488979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.489018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.489194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.489222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.489356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.489382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.489524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.489566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.489791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.489840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.489968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.490012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.490201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.490240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.490386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.490433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 00:33:41.644 [2024-07-26 02:09:23.490555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.644 [2024-07-26 02:09:23.490582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.644 qpair failed and we were unable to recover it. 
00:33:41.644 [2024-07-26 02:09:23.490742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.490785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.490906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.490935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.491114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.491141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.491320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.491349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.491468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.491497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 
00:33:41.645 [2024-07-26 02:09:23.491625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.491652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.491783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.491809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.491958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.491986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.492122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.492149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.492284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.492329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 
00:33:41.645 [2024-07-26 02:09:23.492490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.492522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.492683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.492709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.492881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.492910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.493045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.493083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.493222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.493248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 
00:33:41.645 [2024-07-26 02:09:23.493364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.493391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.493562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.493592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.493719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.493745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.493908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.493951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.494095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.494125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 
00:33:41.645 [2024-07-26 02:09:23.494258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.494284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.494444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.494470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.494603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.494632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.494764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.494790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.494957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.494986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 
00:33:41.645 [2024-07-26 02:09:23.495147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.495173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.495306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.495333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.495486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.495518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.495665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.495708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.495840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.495866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 
00:33:41.645 [2024-07-26 02:09:23.496002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.496029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.496144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.496170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.496304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.496330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.496440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.496466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.496601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.496627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 
00:33:41.645 [2024-07-26 02:09:23.496759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.496784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.496920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.496964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.497092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.497121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.497251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.497276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.497415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.497442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 
00:33:41.645 [2024-07-26 02:09:23.497603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.497644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.497805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.497832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.497967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.497999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.498158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.498185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.498326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.498353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 
00:33:41.645 [2024-07-26 02:09:23.498522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.498574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.498762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.498818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.498970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.498996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.499116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.499168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.499349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.499378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 
00:33:41.645 [2024-07-26 02:09:23.499528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.499558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.499669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.499695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.499848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.499874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.500032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.500057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 00:33:41.645 [2024-07-26 02:09:23.500174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.500199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.645 qpair failed and we were unable to recover it. 
00:33:41.645 [2024-07-26 02:09:23.500332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.645 [2024-07-26 02:09:23.500359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.500508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.500534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.500663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.500705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.500856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.500882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.501017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.501043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 
00:33:41.646 [2024-07-26 02:09:23.501158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.501184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.501326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.501351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.501457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.501483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.501598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.501623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.501761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.501788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 
00:33:41.646 [2024-07-26 02:09:23.501932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.501961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.502123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.502150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.502311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.502352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.502535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.502561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.502663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.502705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 
00:33:41.646 [2024-07-26 02:09:23.502827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.502855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.503008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.503034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.503179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.503205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.503331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.503360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.503522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.503547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 
00:33:41.646 [2024-07-26 02:09:23.503682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.503708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.503856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.503884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.504071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.504097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.504225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.504265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.504435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.504463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 
00:33:41.646 [2024-07-26 02:09:23.504667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.504694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.504823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.504854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.505003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.505033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.505177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.505205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.505344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.505370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 
00:33:41.646 [2024-07-26 02:09:23.505556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.505584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.505719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.505745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.505886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.505913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.506067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.506112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.506246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.506271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 
00:33:41.646 [2024-07-26 02:09:23.506406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.506454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.506570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.506599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.506738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.506764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.506871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.506897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.507103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.507146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 
00:33:41.646 [2024-07-26 02:09:23.507318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.507345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.507455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.507497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.507664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.507708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.507844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.507870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.508031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.508085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 
00:33:41.646 [2024-07-26 02:09:23.508221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.508246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.508350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.508376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.508517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.508542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.508667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.508696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.508855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.508883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 
00:33:41.646 [2024-07-26 02:09:23.509027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.509056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.509229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.509255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.509364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.646 [2024-07-26 02:09:23.509390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.646 qpair failed and we were unable to recover it. 00:33:41.646 [2024-07-26 02:09:23.509490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.509516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.509685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.509711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 
00:33:41.647 [2024-07-26 02:09:23.509821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.509847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.509978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.510004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.510131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.510158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.510290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.510316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.510425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.510452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 
00:33:41.647 [2024-07-26 02:09:23.510610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.510638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.510804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.510830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.510965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.511013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.511150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.511176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.511284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.511311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 
00:33:41.647 [2024-07-26 02:09:23.511474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.511500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.511641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.511667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.511798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.511823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.512001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.512030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 00:33:41.647 [2024-07-26 02:09:23.512195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.647 [2024-07-26 02:09:23.512221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.647 qpair failed and we were unable to recover it. 
00:33:41.647 [2024-07-26 02:09:23.519258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.647 [2024-07-26 02:09:23.519297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.647 qpair failed and we were unable to recover it.
00:33:41.647 [2024-07-26 02:09:23.519464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.647 [2024-07-26 02:09:23.519509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.647 qpair failed and we were unable to recover it.
00:33:41.647 [2024-07-26 02:09:23.519640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.647 [2024-07-26 02:09:23.519668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.647 qpair failed and we were unable to recover it.
00:33:41.647 [2024-07-26 02:09:23.519783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.519826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.648 [2024-07-26 02:09:23.519992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.520034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.648 [2024-07-26 02:09:23.520192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.520220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.648 [2024-07-26 02:09:23.520360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.520387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.648 [2024-07-26 02:09:23.520540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.520583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.648 [2024-07-26 02:09:23.520742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.520768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.648 [2024-07-26 02:09:23.520950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.520979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.648 [2024-07-26 02:09:23.521151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.521177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.521282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.521308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.521442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.521468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.521683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.521737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.521885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.521913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 
00:33:41.648 [2024-07-26 02:09:23.522064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.522109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.522249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.522275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.522405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.522431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.522546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.522572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.522734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.522763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 
00:33:41.648 [2024-07-26 02:09:23.522917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.522945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.523093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.523149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.523298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.523326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.523468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.523494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.523656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.523700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 
00:33:41.648 [2024-07-26 02:09:23.523868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.523895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.524039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.524070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.524232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.524258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.524430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.524473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.524639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.524667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 
00:33:41.648 [2024-07-26 02:09:23.524832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.524886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.525040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.525079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.525214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.525240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.525379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.525410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.525582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.525608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 
00:33:41.648 [2024-07-26 02:09:23.525717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.525743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.525849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.525875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.526054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.526120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.526243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.526270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.526403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.526428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 
00:33:41.648 [2024-07-26 02:09:23.526594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.526649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.526829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.526855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.526962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.527004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.527146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.527172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.527310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.527336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 
00:33:41.648 [2024-07-26 02:09:23.527445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.527485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.527668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.527720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.527908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.527934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.528108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.528138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.528256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.528285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 
00:33:41.648 [2024-07-26 02:09:23.528461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.528487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.528625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.528650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.528753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.528778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.528939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.528964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.529070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.529114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 
00:33:41.648 [2024-07-26 02:09:23.529288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.529316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.529458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.529484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.529653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.529679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.529821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.529851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 00:33:41.648 [2024-07-26 02:09:23.529982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.648 [2024-07-26 02:09:23.530025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.648 qpair failed and we were unable to recover it. 
00:33:41.648 [2024-07-26 02:09:23.530200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.530230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.648 [2024-07-26 02:09:23.530369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.530394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.648 [2024-07-26 02:09:23.530552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.530578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.648 [2024-07-26 02:09:23.530756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.530803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.648 [2024-07-26 02:09:23.530954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.648 [2024-07-26 02:09:23.530980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.648 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.531104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.531130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.531261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.531287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.531442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.531485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.531653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.531679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.531813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.531855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.531993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.532048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.532205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.532233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.532389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.532415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.532580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.532609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.532754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.532781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.532911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.532937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.533066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.533107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.533241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.533268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.533402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.533447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.533638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.533664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.533829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.533855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.534026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.534075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.534240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.534268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.534410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.534436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.534546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.534587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.534734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.534785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.534937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.534966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.535105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.535137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.535302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.535327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.535499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.535525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.535656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.535698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.535841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.535869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.536030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.536056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.536201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.536227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.536325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.536368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.536532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.536559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.536700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.536743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.536898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.536924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.537070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.537096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.537201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.537227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.537342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.537367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.537514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.537541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.537674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.537700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.537804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.537830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.537957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.537995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.538111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.538139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.538250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.538276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.538439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.538465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.538577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.538603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.538769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.538807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.538952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.538980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.539105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.539131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.539269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.539295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.539501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.539556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.539754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.539805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.539958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.539984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.540100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.540128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.540244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.540269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.540404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.649 [2024-07-26 02:09:23.540429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.649 qpair failed and we were unable to recover it.
00:33:41.649 [2024-07-26 02:09:23.540563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.540606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.540795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.540848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.540982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.541008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.541150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.541176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.541340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.541382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.541525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.541570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.541786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.541816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.541932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.541962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.542093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.542126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.542267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.542292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.542424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.542453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.542582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.542624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.542799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.542828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.542944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.542972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.543105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.543131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.543296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.543322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.543474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.543525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.543731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.543760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.543911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.543939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.544075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.544101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.544231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.544260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.544430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.544458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.544640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.544668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.544813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.544841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.544997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.545025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.545164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.545190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.545356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.545381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.545506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.545536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.545659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.545688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.545857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.545885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.546073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.546115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.546221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.546247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.546409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.546434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.546594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.546624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.546780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.546822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.547005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.650 [2024-07-26 02:09:23.547030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.650 qpair failed and we were unable to recover it.
00:33:41.650 [2024-07-26 02:09:23.547170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.547196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.547339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.547365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.547558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.547607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.547784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.547835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.547978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.548008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 
00:33:41.650 [2024-07-26 02:09:23.548139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.548165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.548270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.548296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.548532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.548560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.548775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.548829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.548951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.548980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 
00:33:41.650 [2024-07-26 02:09:23.549143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.549168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.549329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.549354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.549504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.549533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.549702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.549765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.549940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.549969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 
00:33:41.650 [2024-07-26 02:09:23.550083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.550125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.550233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.550259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.550416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.550474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.550721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.550772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.550945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.550973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 
00:33:41.650 [2024-07-26 02:09:23.551145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.551185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.551339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.551379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.551548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.551578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.551766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.551811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 00:33:41.650 [2024-07-26 02:09:23.552005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.650 [2024-07-26 02:09:23.552033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.650 qpair failed and we were unable to recover it. 
00:33:41.650 [2024-07-26 02:09:23.552173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.552199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.552311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.552337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.552490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.552518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.552716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.552773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.552914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.552942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.553067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.553093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.553253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.553278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.553431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.553459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.553592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.553635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.553810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.553839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.553985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.554013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.554153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.554179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.554288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.554313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.554463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.554491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.554621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.554667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.554791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.554821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.554979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.555006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.555168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.555194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.555302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.555327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.555534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.555563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.555799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.555828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.555974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.556004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.556141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.556168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.556308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.556333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.556484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.556512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.556662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.556690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.556843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.556871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.556992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.557020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.557184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.557210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.557379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.557436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.557626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.557670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.557794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.557837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.557975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.558001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.558162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.558207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.558362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.558406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.558561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.558604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.558714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.558740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.558888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.558927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.559116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.559147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.559270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.559300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.559533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.559583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.559763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.559792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.559938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.559967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.560124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.560153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.560306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.560335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.560510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.560557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.560705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.560748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.560884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.560910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.561019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.561045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.561172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.561216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.561398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.561441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.561588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.561631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.561766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.561792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.561943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.561982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.562094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.562127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.562261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.562287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.562474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.562528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.562683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.562709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.562938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.562987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.563123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.563148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 
00:33:41.651 [2024-07-26 02:09:23.563310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.563336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.563493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.563522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.563676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.651 [2024-07-26 02:09:23.563704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.651 qpair failed and we were unable to recover it. 00:33:41.651 [2024-07-26 02:09:23.563838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.652 [2024-07-26 02:09:23.563864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.652 qpair failed and we were unable to recover it. 00:33:41.652 [2024-07-26 02:09:23.564030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.652 [2024-07-26 02:09:23.564056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.652 qpair failed and we were unable to recover it. 
00:33:41.652 [2024-07-26 02:09:23.564200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.652 [2024-07-26 02:09:23.564226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.652 qpair failed and we were unable to recover it. 00:33:41.652 [2024-07-26 02:09:23.564385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.652 [2024-07-26 02:09:23.564414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.652 qpair failed and we were unable to recover it. 00:33:41.652 [2024-07-26 02:09:23.564574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.652 [2024-07-26 02:09:23.564600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.652 qpair failed and we were unable to recover it. 00:33:41.652 [2024-07-26 02:09:23.564771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.652 [2024-07-26 02:09:23.564800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.652 qpair failed and we were unable to recover it. 00:33:41.652 [2024-07-26 02:09:23.564934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.652 [2024-07-26 02:09:23.564959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.652 qpair failed and we were unable to recover it. 
00:33:41.652 [2024-07-26 02:09:23.565097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.565123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.565260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.565285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.565420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.565449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.565597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.565625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.565777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.565807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.565930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.565959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.566116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.566142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.566291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.566330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.566491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.566535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.566660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.566704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.566835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.566879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.566993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.567025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.567231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.567284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.567469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.567498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.567655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.567684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.567816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.567841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.567949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.567975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.568108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.568134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.568316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.568345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.568496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.568525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.568671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.568700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.568819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.568848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.569018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.569047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.569207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.569233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.569386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.569431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.569600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.569643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.569836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.569880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.569990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.570016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.570187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.570232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.570389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.570431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.570566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.652 [2024-07-26 02:09:23.570596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.652 qpair failed and we were unable to recover it.
00:33:41.652 [2024-07-26 02:09:23.570773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.570801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.570963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.570989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.571091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.571117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.571247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.571273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.571457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.571486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.571612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.571666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.571884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.571931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.572088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.572135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.572272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.572298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.572400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.572442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.572597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.572626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.572803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.572832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.572979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.573008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.573172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.573199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.573332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.573357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.573509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.573538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.573714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.573742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.573870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.573912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.574070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.574096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.574198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.574224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.574407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.574436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.574583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.574612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.574763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.574791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.574955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.574994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.575119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.575157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.575299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.575326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.575488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.575518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.575740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.575794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.575926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.575955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.576081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.576125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.576237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.576262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.576403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.576445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.576573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.576601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.653 [2024-07-26 02:09:23.576725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.653 [2024-07-26 02:09:23.576768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.653 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.576915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.576948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.577153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.577192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.577361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.577387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.577545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.577575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.577770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.577799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.577961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.577988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.578095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.578121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.578284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.578310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.578468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.578497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.578657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.578685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.578835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.578863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.579005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.579044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.579181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.579209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.579369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.579413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.579615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.579660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.579815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.579859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.579992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.580018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.580164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.580191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.580323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.580365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.580483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.580511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.580735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.580763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.580911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.580939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.581088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.581114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.581244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.581274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.581416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.581445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.581659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.581708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.581824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.581852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.582028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.582066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.582191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.582217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.582432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.654 [2024-07-26 02:09:23.582483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.654 qpair failed and we were unable to recover it.
00:33:41.654 [2024-07-26 02:09:23.582635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.654 [2024-07-26 02:09:23.582664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.654 qpair failed and we were unable to recover it. 00:33:41.654 [2024-07-26 02:09:23.582811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.654 [2024-07-26 02:09:23.582839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.654 qpair failed and we were unable to recover it. 00:33:41.654 [2024-07-26 02:09:23.583008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.654 [2024-07-26 02:09:23.583047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.654 qpair failed and we were unable to recover it. 00:33:41.654 [2024-07-26 02:09:23.583206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.583234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.583371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.583417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 
00:33:41.655 [2024-07-26 02:09:23.583569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.583611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.583739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.583784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.583887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.583913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.584026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.584053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.584167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.584193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 
00:33:41.655 [2024-07-26 02:09:23.584325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.584351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.584490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.584516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.584645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.584671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.584776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.584801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.584902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.584928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 
00:33:41.655 [2024-07-26 02:09:23.585039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.585070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.585216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.585242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.585365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.585393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.585578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.585606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.585767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.585796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 
00:33:41.655 [2024-07-26 02:09:23.585929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.585957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.586123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.586150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.586277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.586320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.586475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.586517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.586668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.586717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 
00:33:41.655 [2024-07-26 02:09:23.586858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.586885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.586995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.587022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.587149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.587175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.587311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.587352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.587519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.587561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 
00:33:41.655 [2024-07-26 02:09:23.587705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.587735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.587879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.587908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.588071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.588097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.588230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.588255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 00:33:41.655 [2024-07-26 02:09:23.588410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.655 [2024-07-26 02:09:23.588438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.655 qpair failed and we were unable to recover it. 
00:33:41.655 [2024-07-26 02:09:23.588584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.588612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.588786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.588814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.588948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.588973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.589112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.589138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.589245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.589270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 
00:33:41.656 [2024-07-26 02:09:23.589457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.589486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.589660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.589688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.589839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.589867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.590034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.590064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.590197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.590223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 
00:33:41.656 [2024-07-26 02:09:23.590361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.590404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.590556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.590584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.590758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.590786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.590935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.590963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.591122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.591149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 
00:33:41.656 [2024-07-26 02:09:23.591287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.591312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.591471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.591503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.591652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.591681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.591802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.591830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.592000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.592029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 
00:33:41.656 [2024-07-26 02:09:23.592163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.592189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.592329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.592354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.592534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.592562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.592677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.592705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.592851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.592879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 
00:33:41.656 [2024-07-26 02:09:23.593026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.593054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.593198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.593223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.593361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.593386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.593492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.593518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.593639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.593664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 
00:33:41.656 [2024-07-26 02:09:23.593799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.593828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.594004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.594032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.594188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.656 [2024-07-26 02:09:23.594214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.656 qpair failed and we were unable to recover it. 00:33:41.656 [2024-07-26 02:09:23.594395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.594450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.594601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.594646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 
00:33:41.657 [2024-07-26 02:09:23.594828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.594872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.595009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.595035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.595172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.595203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.595411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.595454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.595666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.595716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 
00:33:41.657 [2024-07-26 02:09:23.595848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.595874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.595991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.596019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.596180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.596210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.596368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.596401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.596526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.596554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 
00:33:41.657 [2024-07-26 02:09:23.596698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.596727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.596877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.596905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.597071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.597097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.597230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.597256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 00:33:41.657 [2024-07-26 02:09:23.597382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.657 [2024-07-26 02:09:23.597410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.657 qpair failed and we were unable to recover it. 
00:33:41.657 [2024-07-26 02:09:23.597583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.657 [2024-07-26 02:09:23.597611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.657 qpair failed and we were unable to recover it.
00:33:41.657 [2024-07-26 02:09:23.597728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.657 [2024-07-26 02:09:23.597756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.657 qpair failed and we were unable to recover it.
00:33:41.657 [2024-07-26 02:09:23.597929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.657 [2024-07-26 02:09:23.597958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.657 qpair failed and we were unable to recover it.
00:33:41.657 [2024-07-26 02:09:23.598119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.657 [2024-07-26 02:09:23.598145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.657 qpair failed and we were unable to recover it.
00:33:41.657 [2024-07-26 02:09:23.598285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.657 [2024-07-26 02:09:23.598313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.657 qpair failed and we were unable to recover it.
00:33:41.657 [2024-07-26 02:09:23.598446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.657 [2024-07-26 02:09:23.598475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.657 qpair failed and we were unable to recover it.
00:33:41.657 [2024-07-26 02:09:23.598646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.657 [2024-07-26 02:09:23.598689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.657 qpair failed and we were unable to recover it.
00:33:41.657 [2024-07-26 02:09:23.598853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.657 [2024-07-26 02:09:23.598882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.657 qpair failed and we were unable to recover it.
00:33:41.657 [2024-07-26 02:09:23.599037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.657 [2024-07-26 02:09:23.599069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.657 qpair failed and we were unable to recover it.
00:33:41.657 [2024-07-26 02:09:23.599204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.599230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.599384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.599431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.599589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.599633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.599796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.599840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.599954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.599982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.600124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.600150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.600306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.600334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.600479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.600507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.600619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.600648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.600775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.600817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.600990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.601016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.601158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.601189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.601318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.601346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.601485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.601513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.601633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.601662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.601837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.601865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.602018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.602045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.602188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.602214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.602323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.602366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.602534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.602563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.602695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.602736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.602860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.602889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.603044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.603077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.603201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.603227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.603352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.603384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.603541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.603570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.603716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.603745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.603889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.603917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.604081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.604107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.604223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.604249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.604404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.604433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.604615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.604644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.658 qpair failed and we were unable to recover it.
00:33:41.658 [2024-07-26 02:09:23.604778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.658 [2024-07-26 02:09:23.604803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.604937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.604966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.605116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.605142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.605263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.605289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.605451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.605479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.605633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.605662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.605780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.605813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.605957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.605983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.606100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.606127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.606264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.606290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.606450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.606479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.606655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.606684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.606863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.606891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.607029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.607063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.607187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.607213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.607353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.607379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.607504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.607533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.607679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.607707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.607858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.607886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.608011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.608040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.608243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.608283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.608432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.608460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.608644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.608687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.608813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.608856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.608999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.609025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.609165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.609191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.609380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.609424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.609551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.609595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.609704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.609730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.609870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.609896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.610004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.610030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.610170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.610197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.610353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.610402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.659 [2024-07-26 02:09:23.610576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.659 [2024-07-26 02:09:23.610608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.659 qpair failed and we were unable to recover it.
00:33:41.660 [2024-07-26 02:09:23.610747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.660 [2024-07-26 02:09:23.610774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.660 qpair failed and we were unable to recover it.
00:33:41.660 [2024-07-26 02:09:23.610885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.660 [2024-07-26 02:09:23.610912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.660 qpair failed and we were unable to recover it.
00:33:41.946 [2024-07-26 02:09:23.611098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.946 [2024-07-26 02:09:23.611127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.946 qpair failed and we were unable to recover it.
00:33:41.946 [2024-07-26 02:09:23.611289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.946 [2024-07-26 02:09:23.611317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.946 qpair failed and we were unable to recover it.
00:33:41.946 [2024-07-26 02:09:23.611449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.946 [2024-07-26 02:09:23.611478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.946 qpair failed and we were unable to recover it.
00:33:41.946 [2024-07-26 02:09:23.611640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.946 [2024-07-26 02:09:23.611666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.946 qpair failed and we were unable to recover it.
00:33:41.946 [2024-07-26 02:09:23.611778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.946 [2024-07-26 02:09:23.611804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.946 qpair failed and we were unable to recover it.
00:33:41.946 [2024-07-26 02:09:23.611940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.946 [2024-07-26 02:09:23.611965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.946 qpair failed and we were unable to recover it.
00:33:41.946 [2024-07-26 02:09:23.612097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.946 [2024-07-26 02:09:23.612123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.946 qpair failed and we were unable to recover it.
00:33:41.946 [2024-07-26 02:09:23.612274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.946 [2024-07-26 02:09:23.612303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.946 qpair failed and we were unable to recover it.
00:33:41.946 [2024-07-26 02:09:23.612464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.946 [2024-07-26 02:09:23.612490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.946 qpair failed and we were unable to recover it.
00:33:41.946 [2024-07-26 02:09:23.612637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.946 [2024-07-26 02:09:23.612679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.946 qpair failed and we were unable to recover it.
00:33:41.946 [2024-07-26 02:09:23.612827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.612855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.613035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.613067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.613182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.613208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.613335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.613363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.613517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.613546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.613719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.613747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.613869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.613898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.614044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.614076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.614220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.614246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.614372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.614402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.614518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.614546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.614703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.614731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.614848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.614877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.615074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.615132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.615300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.615350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.615511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.615555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.615739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.615783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.615895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.615922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.616037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.616073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.616183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.616210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.616344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.616371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.616483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.616510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.616619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.616646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.616752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.616777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.616892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.616919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.617024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.617049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.617221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.617250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.617397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.947 [2024-07-26 02:09:23.617426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.947 qpair failed and we were unable to recover it.
00:33:41.947 [2024-07-26 02:09:23.617579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.617607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-26 02:09:23.617751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.617779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-26 02:09:23.617925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.617954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-26 02:09:23.618112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.618140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-26 02:09:23.618281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.618326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 
00:33:41.947 [2024-07-26 02:09:23.618450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.618495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-26 02:09:23.618649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.618692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-26 02:09:23.618800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.618825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-26 02:09:23.618939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.618967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-26 02:09:23.619104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.619131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 
00:33:41.947 [2024-07-26 02:09:23.619270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.619296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-26 02:09:23.619401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.619426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-26 02:09:23.619563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-26 02:09:23.619589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-26 02:09:23.619727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.619757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.619870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.619895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-26 02:09:23.620036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.620071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.620209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.620235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.620393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.620419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.620579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.620622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.620749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.620779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-26 02:09:23.620908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.620934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.621074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.621120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.621267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.621296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.621479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.621507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.621661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.621690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-26 02:09:23.621830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.621859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.621991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.622017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.622166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.622191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.622352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.622381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.622533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.622563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-26 02:09:23.622738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.622767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.622910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.622939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.623106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.623132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.623243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.623285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.623436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.623465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-26 02:09:23.623614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.623642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.623765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.623794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.623910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.623939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.624082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.624125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.624266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.624294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-26 02:09:23.624432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.624482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.624655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.624682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.624817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.624844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.624982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.625008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.625176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.625220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-26 02:09:23.625355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.625398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.625509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.625536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.625669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.625713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.625854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.625881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.626017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.626042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-26 02:09:23.626209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.626237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.626361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.626390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.626509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.626538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.626689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.626719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.626877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.626905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-26 02:09:23.627072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.627099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.627290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.627333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.627494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.627537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.627695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.627738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-26 02:09:23.627898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-26 02:09:23.627924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-26 02:09:23.628093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.628119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.628277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.628322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.628478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.628522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.628709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.628753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.628915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.628942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-26 02:09:23.629106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.629136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.629279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.629306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.629447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.629473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.629582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.629607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.629712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.629738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-26 02:09:23.629905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.629931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.630035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.630066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.630202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.630231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.630370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.630399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.630546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.630574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-26 02:09:23.630748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.630794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.630941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.630967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.631081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.631108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.631264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.631308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.631465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.631508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-26 02:09:23.631692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.631736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.631905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.631931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.632070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.632096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.632252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.632297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.632520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.632565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-26 02:09:23.632761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.632787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.632893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.632919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.633055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.633089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.633205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.633231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.633341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.633367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-26 02:09:23.633531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.633557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.633665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.633691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.633803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.633829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.633988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.634014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.634168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.634195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-26 02:09:23.634361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.634386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.634505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.634533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.634694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.634720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.634858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.634883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.635020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.635046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-26 02:09:23.635190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.635215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.635354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.635382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.635554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-26 02:09:23.635583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-26 02:09:23.635722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.635750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.635901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.635929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 
00:33:41.950 [2024-07-26 02:09:23.636084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.636110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.636245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.636270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.636395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.636428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.636602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.636631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.636773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.636801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 
00:33:41.950 [2024-07-26 02:09:23.636948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.636977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.637146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.637186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.637306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.637333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.637457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.637500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.637647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.637692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 
00:33:41.950 [2024-07-26 02:09:23.637880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.637923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.638034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.638070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.638237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.638264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.638380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.638405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.638515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.638541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 
00:33:41.950 [2024-07-26 02:09:23.638698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.638726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.638876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.638904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.639053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.639110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.639235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.639264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.639420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.639448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 
00:33:41.950 [2024-07-26 02:09:23.639593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.639621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.639797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.639826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.639948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.639974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.640135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.640161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.640293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.640319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 
00:33:41.950 [2024-07-26 02:09:23.640474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.640502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.640625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.640654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.640781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.640824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.640995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.641023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.641193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.641223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 
00:33:41.950 [2024-07-26 02:09:23.641361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.641387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.641490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.641531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.641705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.641733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.641971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.641999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-26 02:09:23.642142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.642168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 
00:33:41.950 [2024-07-26 02:09:23.642303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-26 02:09:23.642328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.642482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.642510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.642628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.642656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.642801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.642829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.642990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.643016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 
00:33:41.951 [2024-07-26 02:09:23.643124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.643150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.643258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.643284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.643440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.643469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.643594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.643622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.643769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.643797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 
00:33:41.951 [2024-07-26 02:09:23.643927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.643955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.644122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.644148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.644285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.644311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.644495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.644524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.644639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.644667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 
00:33:41.951 [2024-07-26 02:09:23.644811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.644839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.645011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.645039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.645199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.645224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.645336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.645361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.645521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.645546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 
00:33:41.951 [2024-07-26 02:09:23.645705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.645733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.645886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.645914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.646095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.646121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.646261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.646287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.646456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.646484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 
00:33:41.951 [2024-07-26 02:09:23.646614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.646657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.646801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.646829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.646952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.646981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.647138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.647164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.647297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.647322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 
00:33:41.951 [2024-07-26 02:09:23.647505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.647533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.647681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.647710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.647855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.647884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.648010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.648035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-26 02:09:23.648203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-26 02:09:23.648229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 
00:33:41.952 [2024-07-26 02:09:23.651953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.952 [2024-07-26 02:09:23.651992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.952 qpair failed and we were unable to recover it.
00:33:41.952 [2024-07-26 02:09:23.652135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.952 [2024-07-26 02:09:23.652163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.952 qpair failed and we were unable to recover it.
00:33:41.952 [2024-07-26 02:09:23.652291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.952 [2024-07-26 02:09:23.652335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.952 qpair failed and we were unable to recover it.
00:33:41.952 [2024-07-26 02:09:23.652569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.952 [2024-07-26 02:09:23.652619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.952 qpair failed and we were unable to recover it.
00:33:41.952 [2024-07-26 02:09:23.652767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.952 [2024-07-26 02:09:23.652795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.952 qpair failed and we were unable to recover it.
00:33:41.954 [2024-07-26 02:09:23.667959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.667985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.668134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.668160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.668293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.668319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.668436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.668465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.668607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.668635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 
00:33:41.954 [2024-07-26 02:09:23.668783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.668811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.668968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.668994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.669105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.669131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.669242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.669267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.669414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.669442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 
00:33:41.954 [2024-07-26 02:09:23.669589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.669618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.669752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.669777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.669981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.670007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.670131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.670157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.670263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.670289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 
00:33:41.954 [2024-07-26 02:09:23.670434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.670462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.670589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.670617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.670765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.670794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.670964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.671002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.671125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.671153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 
00:33:41.954 [2024-07-26 02:09:23.671283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.671328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.671456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.671486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.671611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.671638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.671780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.671807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.671918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.671945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 
00:33:41.954 [2024-07-26 02:09:23.672052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.672089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.672206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.672232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.672342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.672367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.672476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.672502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.672608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.672634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 
00:33:41.954 [2024-07-26 02:09:23.672767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.672812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.672927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.672954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.673073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.954 [2024-07-26 02:09:23.673101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.954 qpair failed and we were unable to recover it. 00:33:41.954 [2024-07-26 02:09:23.673257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.673301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.673442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.673471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 
00:33:41.955 [2024-07-26 02:09:23.673611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.673641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.673773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.673799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.673935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.673960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.674073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.674117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.674245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.674273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 
00:33:41.955 [2024-07-26 02:09:23.674418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.674447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.674590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.674619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.674815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.674847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.674999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.675026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.675183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.675228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 
00:33:41.955 [2024-07-26 02:09:23.675389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.675432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.675614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.675643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.675768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.675795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.675931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.675958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.676072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.676098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 
00:33:41.955 [2024-07-26 02:09:23.676211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.676236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.676384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.676413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.676562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.676595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.676717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.676746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.676904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.676929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 
00:33:41.955 [2024-07-26 02:09:23.677037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.677069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.677197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.677226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.677379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.677408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.677555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.677584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.677730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.677758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 
00:33:41.955 [2024-07-26 02:09:23.677919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.677947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.678080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.678107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.678260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.678305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.678497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.678527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.678645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.678671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 
00:33:41.955 [2024-07-26 02:09:23.678783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.678810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.678921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.678947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.679135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.679164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.679285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.679313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.679462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.679490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 
00:33:41.955 [2024-07-26 02:09:23.679611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.679640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.679761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.679791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.679917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.679942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.680078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.680104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 00:33:41.955 [2024-07-26 02:09:23.680268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.955 [2024-07-26 02:09:23.680293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.955 qpair failed and we were unable to recover it. 
00:33:41.955 [2024-07-26 02:09:23.680454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.955 [2024-07-26 02:09:23.680483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.955 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it.) repeats ~114 more times between 02:09:23.680659 and 02:09:23.699538, alternating between tqpair=0x1545f40 and tqpair=0x7fd150000b90, always with addr=10.0.0.2, port=4420 ...]
00:33:41.958 [2024-07-26 02:09:23.699689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.699719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.699863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.699892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.700020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.700046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.700160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.700186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.700290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.700316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 
00:33:41.958 [2024-07-26 02:09:23.700503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.700532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.700679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.700708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.700827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.700856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.701023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.701070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.701186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.701213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 
00:33:41.958 [2024-07-26 02:09:23.701327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.701353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.701494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.701523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.701696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.701740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.701891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.701918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.702055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.702089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 
00:33:41.958 [2024-07-26 02:09:23.702217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.702261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.702373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.702400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.702561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.702588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.702694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.702720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.702859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.702886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 
00:33:41.958 [2024-07-26 02:09:23.702993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.703020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.703170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.703215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.703342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.703386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.703571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.703615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.703738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.703770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 
00:33:41.958 [2024-07-26 02:09:23.703878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.703904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.704047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.704083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.704235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.704280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.704437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.704480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.958 [2024-07-26 02:09:23.704612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.704656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 
00:33:41.958 [2024-07-26 02:09:23.704775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.958 [2024-07-26 02:09:23.704801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.958 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.704905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.704931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.705077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.705106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.705217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.705243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.705362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.705387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 
00:33:41.959 [2024-07-26 02:09:23.705523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.705548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.705656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.705681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.705816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.705842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.705959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.705986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.706146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.706190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 
00:33:41.959 [2024-07-26 02:09:23.706318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.706366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.706536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.706563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.706671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.706697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.706823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.706849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.706958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.706985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 
00:33:41.959 [2024-07-26 02:09:23.707094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.707120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.707269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.707295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.707403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.707429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.707568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.707593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.707714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.707742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 
00:33:41.959 [2024-07-26 02:09:23.707875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.707901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.708011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.708042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.708181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.708207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.708349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.708378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.708505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.708547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 
00:33:41.959 [2024-07-26 02:09:23.708721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.708750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.708872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.708900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.709020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.709048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.709179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.709205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.709331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.709359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 
00:33:41.959 [2024-07-26 02:09:23.709507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.709536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.709658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.709686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.709796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.709825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.709945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.709973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.710111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.710137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 
00:33:41.959 [2024-07-26 02:09:23.710284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.710310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.710421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.710462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.710579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.710607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.710748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.710776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.710906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.710932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 
00:33:41.959 [2024-07-26 02:09:23.711069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.711095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.711207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.711233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.711371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.711396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.711552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.711581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 00:33:41.959 [2024-07-26 02:09:23.711740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.959 [2024-07-26 02:09:23.711769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.959 qpair failed and we were unable to recover it. 
00:33:41.959 [2024-07-26 02:09:23.711915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.959 [2024-07-26 02:09:23.711944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.959 qpair failed and we were unable to recover it.
00:33:41.959 [2024-07-26 02:09:23.712071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.959 [2024-07-26 02:09:23.712114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.712226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.712252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.712385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.712415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.712574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.712603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.712758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.712787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.712934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.712962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.713120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.713147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.713245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.713271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.713405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.713431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.713576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.713605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.713752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.713780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.713904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.713933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.714069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.714096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.714196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.714222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.714361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.714403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.714526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.714554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.714765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.714793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.714913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.714942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.715076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.715103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.715244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.715270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.715382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.715426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.715543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.715571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.715706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.715750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.715871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.715899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.716082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.716125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.716230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.716258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.716370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.716396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.716527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.716556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.716676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.716718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.716867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.716900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.717023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.717051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.717215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.717241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.717366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.717395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.717549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.717578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.717700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.717742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.717886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.717915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.718056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.718107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.718221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.718247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.718357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.718383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.718483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.718527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.718662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.718704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.960 qpair failed and we were unable to recover it.
00:33:41.960 [2024-07-26 02:09:23.718861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.960 [2024-07-26 02:09:23.718890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.719006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.719035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.719196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.719225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.719341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.719370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.719495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.719524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.719635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.719663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.719788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.719818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.719956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.719984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.720153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.720210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.720384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.720411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.720541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.720586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.720732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.720759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.720921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.720947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.721114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.721141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.721279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.721309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.721428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.721461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.721590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.721615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.721766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.721794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.721916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.721941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.722072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.722098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.722212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.722237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.722385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.722413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.722536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.722564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.722687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.722716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.722860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.722888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.723012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.723037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.723148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.723174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.723304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.723330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.723456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.723484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.723626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.723654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.723830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.723858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.723996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.724022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.724134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.724160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.724264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.724290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.724479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.724507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.724705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.724733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.724857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.724885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.725030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.725066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.725219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.725244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.725342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.725368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.725500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.725528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.725673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.725701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.725889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.725917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.726053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.726086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.726194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.726220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.726339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.726367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.726486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.726514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.726639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.726668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.726789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.726817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.726966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.726994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.727125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.727152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.961 qpair failed and we were unable to recover it.
00:33:41.961 [2024-07-26 02:09:23.727286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.961 [2024-07-26 02:09:23.727314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.727473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.727499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.727627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.727656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.727766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.727794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.727943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.727973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.728129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.728159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.728279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.728305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.728464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.728492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.728665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.728693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.728812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.728841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.728971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.728997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.729114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.729141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.729302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.729328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.729462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.729491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.729616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.729658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.729832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.729860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.729997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.730023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.730136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.730162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.730300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.730325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.730473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.730498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.730626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.730655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.730812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.962 [2024-07-26 02:09:23.730840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.962 qpair failed and we were unable to recover it.
00:33:41.962 [2024-07-26 02:09:23.731014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.731042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.731187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.731213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.731326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.731351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.731451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.731493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.731640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.731668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 
00:33:41.962 [2024-07-26 02:09:23.731812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.731841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.731962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.731990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.732163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.732201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.732371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.732419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.732599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.732629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 
00:33:41.962 [2024-07-26 02:09:23.732806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.732860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.732994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.733020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.733193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.733220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.733348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.733393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.733521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.733564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 
00:33:41.962 [2024-07-26 02:09:23.733692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.733735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.733867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.733894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.734008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.734033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.734178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.734204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.734307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.734332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 
00:33:41.962 [2024-07-26 02:09:23.734487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.734515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.734648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.734676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.734798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.734826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.734994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.962 [2024-07-26 02:09:23.735021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.962 qpair failed and we were unable to recover it. 00:33:41.962 [2024-07-26 02:09:23.735173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.735199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 
00:33:41.963 [2024-07-26 02:09:23.735353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.735380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.735490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.735517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.735638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.735666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.735806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.735833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.735953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.735981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 
00:33:41.963 [2024-07-26 02:09:23.736136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.736163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.736284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.736312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.736470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.736496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.736636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.736664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.736814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.736841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 
00:33:41.963 [2024-07-26 02:09:23.736957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.736984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.737113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.737139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.737294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.737344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.737507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.737549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.737673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.737721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 
00:33:41.963 [2024-07-26 02:09:23.737858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.737885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.738023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.738050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.738177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.738204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.738317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.738344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.738503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.738530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 
00:33:41.963 [2024-07-26 02:09:23.738649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.738678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.738813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.738840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.738982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.739008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.739122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.739147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.739302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.739329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 
00:33:41.963 [2024-07-26 02:09:23.739479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.739507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.739652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.739680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.739890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.739936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.740048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.740084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.740242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.740285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 
00:33:41.963 [2024-07-26 02:09:23.740411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.740439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.740606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.740648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.740810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.740835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.740941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.740967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.741103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.741129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 
00:33:41.963 [2024-07-26 02:09:23.741259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.741285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.741453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.741479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.741633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.741660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.741775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.741802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.741964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.741991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 
00:33:41.963 [2024-07-26 02:09:23.742106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.742133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.742284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.742328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.742451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.963 [2024-07-26 02:09:23.742480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.963 qpair failed and we were unable to recover it. 00:33:41.963 [2024-07-26 02:09:23.742646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.742675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 00:33:41.964 [2024-07-26 02:09:23.742828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.742854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 
00:33:41.964 [2024-07-26 02:09:23.743003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.743030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 00:33:41.964 [2024-07-26 02:09:23.743163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.743189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 00:33:41.964 [2024-07-26 02:09:23.743320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.743363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 00:33:41.964 [2024-07-26 02:09:23.743487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.743530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 00:33:41.964 [2024-07-26 02:09:23.743719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.743746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 
00:33:41.964 [2024-07-26 02:09:23.743907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.743934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 00:33:41.964 [2024-07-26 02:09:23.744043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.744077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 00:33:41.964 [2024-07-26 02:09:23.744229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.744255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 00:33:41.964 [2024-07-26 02:09:23.744414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.744441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 00:33:41.964 [2024-07-26 02:09:23.744566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.964 [2024-07-26 02:09:23.744608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.964 qpair failed and we were unable to recover it. 
00:33:41.964 [2024-07-26 02:09:23.744778 to 02:09:23.761722] (log condensed: the same "connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error" / "qpair failed and we were unable to recover it." sequence repeated approximately 110 more times, for tqpair=0x1545f40 and tqpair=0x7fd150000b90, with addr=10.0.0.2, port=4420)
00:33:41.966 [2024-07-26 02:09:23.761860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.966 [2024-07-26 02:09:23.761886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.966 qpair failed and we were unable to recover it. 00:33:41.966 [2024-07-26 02:09:23.761986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.966 [2024-07-26 02:09:23.762012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.966 qpair failed and we were unable to recover it. 00:33:41.966 [2024-07-26 02:09:23.762151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.966 [2024-07-26 02:09:23.762177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.966 qpair failed and we were unable to recover it. 00:33:41.966 [2024-07-26 02:09:23.762307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.966 [2024-07-26 02:09:23.762333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.966 qpair failed and we were unable to recover it. 00:33:41.966 [2024-07-26 02:09:23.762467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.966 [2024-07-26 02:09:23.762493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.966 qpair failed and we were unable to recover it. 
00:33:41.966 [2024-07-26 02:09:23.762615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.966 [2024-07-26 02:09:23.762641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.966 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.762774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.762799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.762896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.762921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.763030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.763055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.763195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.763225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 
00:33:41.967 [2024-07-26 02:09:23.763333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.763359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.763472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.763498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.763601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.763626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.763727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.763753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.763913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.763939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 
00:33:41.967 [2024-07-26 02:09:23.764040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.764071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.764182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.764208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.764326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.764352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.764469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.764497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.764631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.764657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 
00:33:41.967 [2024-07-26 02:09:23.764768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.764795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.764960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.764987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.765125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.765152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.765294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.765320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.765460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.765487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 
00:33:41.967 [2024-07-26 02:09:23.765651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.765677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.765826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.765852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.765964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.765990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.766101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.766127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.766268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.766294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 
00:33:41.967 [2024-07-26 02:09:23.766433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.766459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.766568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.766593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.766701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.766726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.766861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.766887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.767022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.767047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 
00:33:41.967 [2024-07-26 02:09:23.767168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.767193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.767323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.767353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.767488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.767513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.767615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.767640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.767775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.767802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 
00:33:41.967 [2024-07-26 02:09:23.767936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.767962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.768070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.768097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.768238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.768265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.768376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.768402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.768519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.768545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 
00:33:41.967 [2024-07-26 02:09:23.768663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.768692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.768821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.768847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.768984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.769010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.769180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.769207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.769322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.769348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 
00:33:41.967 [2024-07-26 02:09:23.769463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.967 [2024-07-26 02:09:23.769489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.967 qpair failed and we were unable to recover it. 00:33:41.967 [2024-07-26 02:09:23.769639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.769666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.769786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.769812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.769920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.769946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.770053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.770089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 
00:33:41.968 [2024-07-26 02:09:23.770205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.770231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.770347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.770375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.770488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.770515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.770631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.770657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.770765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.770790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 
00:33:41.968 [2024-07-26 02:09:23.770926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.770952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.771079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.771106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.771212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.771238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.771378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.771407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.771523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.771550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 
00:33:41.968 [2024-07-26 02:09:23.771696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.771722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.771836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.771862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.772005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.772031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.772176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.772203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.772331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.772357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 
00:33:41.968 [2024-07-26 02:09:23.772492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.772518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.772625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.772652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.772785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.772811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.772971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.772997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.773137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.773163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 
00:33:41.968 [2024-07-26 02:09:23.773285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.773311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.773411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.773442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.773555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.773581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.773746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.773774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.773912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.773937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 
00:33:41.968 [2024-07-26 02:09:23.774078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.774104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.774243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.774268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.774404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.774430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.774538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.774563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 00:33:41.968 [2024-07-26 02:09:23.774666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.968 [2024-07-26 02:09:23.774692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.968 qpair failed and we were unable to recover it. 
00:33:41.968 [2024-07-26 02:09:23.774808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.968 [2024-07-26 02:09:23.774832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.968 qpair failed and we were unable to recover it.
00:33:41.968 [2024-07-26 02:09:23.774937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.968 [2024-07-26 02:09:23.774963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.968 qpair failed and we were unable to recover it.
00:33:41.968 [2024-07-26 02:09:23.775072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.968 [2024-07-26 02:09:23.775100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.968 qpair failed and we were unable to recover it.
00:33:41.968 [2024-07-26 02:09:23.775232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.968 [2024-07-26 02:09:23.775258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.968 qpair failed and we were unable to recover it.
00:33:41.968 [2024-07-26 02:09:23.775391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.968 [2024-07-26 02:09:23.775417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.968 qpair failed and we were unable to recover it.
00:33:41.968 [2024-07-26 02:09:23.775561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.968 [2024-07-26 02:09:23.775587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.968 qpair failed and we were unable to recover it.
00:33:41.968 [2024-07-26 02:09:23.775699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.968 [2024-07-26 02:09:23.775726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.968 qpair failed and we were unable to recover it.
00:33:41.968 [2024-07-26 02:09:23.775833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.968 [2024-07-26 02:09:23.775859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.968 qpair failed and we were unable to recover it.
00:33:41.968 [2024-07-26 02:09:23.776002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.968 [2024-07-26 02:09:23.776027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.968 qpair failed and we were unable to recover it.
00:33:41.968 [2024-07-26 02:09:23.776156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.968 [2024-07-26 02:09:23.776183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.968 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.776300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.776325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.776441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.776466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.776584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.776610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.776723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.776749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.776887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.776914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.777038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.777070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.777175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.777199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.777368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.777394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.777502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.777531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.777672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.777697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.777856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.777881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.777986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.778011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.778148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.778174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.778310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.778335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.778465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.778491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.778631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.778656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.778769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.778796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.778914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.778941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.779047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.779082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.779218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.779244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.779358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.779385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.779519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.779546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.779659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.779686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.779803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.779828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.779962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.779987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.780095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.780120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.780225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.780249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.780394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.780419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.780520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.780544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.780652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.780678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.780784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.780808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.780969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.780995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.781104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.781130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.781241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.781267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.781375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.781400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.781510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.781538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.781701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.969 [2024-07-26 02:09:23.781727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.969 qpair failed and we were unable to recover it.
00:33:41.969 [2024-07-26 02:09:23.781857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.781882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.781987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.782013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.782184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.782209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.782348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.782373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.782506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.782532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.782665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.782689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.782828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.782853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.782987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.783012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.783159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.783184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.783314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.783339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.783471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.783496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.783633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.783658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.783767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.783793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.783946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.783971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.784085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.784112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.784239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.784267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.784414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.784441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.784614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.784643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.784838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.784866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.784986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.785013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.785155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.785182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.785322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.785347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.785509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.785536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.785709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.785738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.785886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.785915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.786070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.786096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.786240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.786264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.786463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.786520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.786670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.786698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.786883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.786912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.787038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.787068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.787201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.787227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.787347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.787372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.787476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.787501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.787635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.787661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.787776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.787800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.787946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.787985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.788125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.788153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.788293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.788320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.788454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.788484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.788665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.788693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.788838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.788865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.789020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.789046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.970 qpair failed and we were unable to recover it.
00:33:41.970 [2024-07-26 02:09:23.789192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.970 [2024-07-26 02:09:23.789218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.971 qpair failed and we were unable to recover it.
00:33:41.971 [2024-07-26 02:09:23.789355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.971 [2024-07-26 02:09:23.789379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.971 qpair failed and we were unable to recover it.
00:33:41.971 [2024-07-26 02:09:23.789547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.971 [2024-07-26 02:09:23.789571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.971 qpair failed and we were unable to recover it.
00:33:41.971 [2024-07-26 02:09:23.789770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.971 [2024-07-26 02:09:23.789796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.971 qpair failed and we were unable to recover it.
00:33:41.971 [2024-07-26 02:09:23.789910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.971 [2024-07-26 02:09:23.789935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.971 qpair failed and we were unable to recover it.
00:33:41.971 [2024-07-26 02:09:23.790038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.790068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.790216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.790241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.790364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.790391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.790563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.790591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.790765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.790793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 
00:33:41.971 [2024-07-26 02:09:23.790948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.790976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.791137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.791163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.791280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.791305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.791417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.791442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.791554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.791579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 
00:33:41.971 [2024-07-26 02:09:23.791712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.791737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.791868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.791893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.792027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.792051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.792192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.792218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.792331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.792357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 
00:33:41.971 [2024-07-26 02:09:23.792512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.792540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.792707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.792732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.792841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.792866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.793004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.793034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.793181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.793206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 
00:33:41.971 [2024-07-26 02:09:23.793315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.793340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.793495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.793524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.793666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.793694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.793811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.793838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.793993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.794018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 
00:33:41.971 [2024-07-26 02:09:23.794130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.794155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.794271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.794298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.794455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.794480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.794609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.794636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.794809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.794838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 
00:33:41.971 [2024-07-26 02:09:23.794986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.795014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.795184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.795210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.795370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.795398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.795575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.795604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.795749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.795777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 
00:33:41.971 [2024-07-26 02:09:23.795923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.795951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.796113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.796139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.796256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.796280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.796441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.796466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.796597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.796625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 
00:33:41.971 [2024-07-26 02:09:23.796783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.796811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.796965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.796993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.797155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.797181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.797288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.797313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.797481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.797525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 
00:33:41.971 [2024-07-26 02:09:23.797669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.797701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.797878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.797906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.798031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.798056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.798168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.798193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.798338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.798379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 
00:33:41.971 [2024-07-26 02:09:23.798503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.971 [2024-07-26 02:09:23.798532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.971 qpair failed and we were unable to recover it. 00:33:41.971 [2024-07-26 02:09:23.798723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.798751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.798886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.798910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.799051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.799081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.799246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.799271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 
00:33:41.972 [2024-07-26 02:09:23.799429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.799456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.799614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.799657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.799830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.799858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.800016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.800041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.800178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.800202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 
00:33:41.972 [2024-07-26 02:09:23.800380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.800443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.800590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.800619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.800771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.800800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.800969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.800997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.801151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.801177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 
00:33:41.972 [2024-07-26 02:09:23.801288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.801312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.801428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.801453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.801611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.801635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.801758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.801786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.801899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.801926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 
00:33:41.972 [2024-07-26 02:09:23.802043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.802078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.802214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.802239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.802351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.802379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.802537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.802562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.802698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.802723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 
00:33:41.972 [2024-07-26 02:09:23.802946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.802973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.803116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.803141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.803275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.803300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.803429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.803453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.803569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.803594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 
00:33:41.972 [2024-07-26 02:09:23.803726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.803750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.803859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.803883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.804011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.804036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.804163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.804188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.804328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.804353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 
00:33:41.972 [2024-07-26 02:09:23.804457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.804482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.804613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.804643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.804777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.804803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.804913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.804954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.805115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.805141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 
00:33:41.972 [2024-07-26 02:09:23.805278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.805304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.805413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.805453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.805580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.805608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.805755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.805781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 00:33:41.972 [2024-07-26 02:09:23.805912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.972 [2024-07-26 02:09:23.805937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.972 qpair failed and we were unable to recover it. 
00:33:41.972 [2024-07-26 02:09:23.806087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.806116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.806249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.806274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.806381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.806406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.806631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.806659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.806824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.806849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.807014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.807039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.807231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.807260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.807423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.807448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.807612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.807637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.807743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.807768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.807897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.807923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.808063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.972 [2024-07-26 02:09:23.808089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.972 qpair failed and we were unable to recover it.
00:33:41.972 [2024-07-26 02:09:23.808201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.808227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.808343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.808369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.808529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.808555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.808687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.808712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.808822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.808848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.808997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.809023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.809159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.809198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.809346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.809373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.809509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.809551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.809736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.809762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.809903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.809929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.810128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.810155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.810304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.810330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.810528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.810554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.810689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.810714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.810848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.810892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.811028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.811053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.811209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.811234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.811366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.811391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.811497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.811523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.811640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.811666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.811801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.811827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.811967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.811992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.812140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.812166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.812308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.812336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.812483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.812509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.812693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.812722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.812910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.812935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.813045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.813077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.813213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.813238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.813376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.813403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.813570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.813596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.813754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.813794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.813930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.813956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.814065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.814091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.814236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.814262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.814403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.814429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.814574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.814600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.814716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.814741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.814843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.814870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.815006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.815031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.815200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.815226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.815358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.815384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.815487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.815512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.815645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.815670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.815823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.815851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.816011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.816042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.816154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.816181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.816341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.816366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.816545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.816571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.973 [2024-07-26 02:09:23.816683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.973 [2024-07-26 02:09:23.816709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.973 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.816875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.816901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.817075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.817102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.817292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.817320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.817536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.817586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.817747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.817773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.817909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.817935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.818082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.818124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.818308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.818333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.818451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.818476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.818612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.818638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.818801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.818826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.818972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.818998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.819118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.819156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.819299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.819326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.819460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.819486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.819625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.819650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.819753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.819778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.819886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.819912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.820022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.820050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.820200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.820227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.820341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.820367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.820522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.820550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.820740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.820770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.820905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.820933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.821091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.821118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.821258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.821284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.821422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.821447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.821591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.821616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.821731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.821757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.821869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.821895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.822028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.822079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.822219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.974 [2024-07-26 02:09:23.822246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.974 qpair failed and we were unable to recover it.
00:33:41.974 [2024-07-26 02:09:23.822383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.822425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.822536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.822565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.822737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.822762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.822881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.822908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.823051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.823082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 
00:33:41.974 [2024-07-26 02:09:23.823190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.823216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.823353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.823379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.823520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.823545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.823695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.823720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.823878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.823904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 
00:33:41.974 [2024-07-26 02:09:23.824035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.824065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.824181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.824207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.824346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.824372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.824504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.824532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.824691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.824717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 
00:33:41.974 [2024-07-26 02:09:23.824872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.824913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.825040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.825081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.825245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.825272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.825416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.825457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.825624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.825650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 
00:33:41.974 [2024-07-26 02:09:23.825814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.825840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.825998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.826027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.974 qpair failed and we were unable to recover it. 00:33:41.974 [2024-07-26 02:09:23.826200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.974 [2024-07-26 02:09:23.826227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.826365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.826391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.826506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.826531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 
00:33:41.975 [2024-07-26 02:09:23.826664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.826690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.826829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.826855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.826993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.827019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.827164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.827190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.827360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.827385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 
00:33:41.975 [2024-07-26 02:09:23.827532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.827566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.827744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.827773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.827926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.827952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.828083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.828110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.828274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.828316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 
00:33:41.975 [2024-07-26 02:09:23.828477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.828503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.828643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.828686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.828837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.828866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.829079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.829121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.829255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.829281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 
00:33:41.975 [2024-07-26 02:09:23.829413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.829443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.829593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.829620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.829737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.829763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.829897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.829923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.830107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.830133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 
00:33:41.975 [2024-07-26 02:09:23.830245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.830271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.830419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.830444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.830622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.830648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.830784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.830810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.830936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.830964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 
00:33:41.975 [2024-07-26 02:09:23.831123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.831150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.831311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.831340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.831444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.831470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.831632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.831658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.831814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.831847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 
00:33:41.975 [2024-07-26 02:09:23.832039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.832070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.832218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.832244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.832430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.832459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.832579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.832608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.832791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.832817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 
00:33:41.975 [2024-07-26 02:09:23.832966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.832994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.833152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.833178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.833317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.833343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.833459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.833502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.833652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.833678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 
00:33:41.975 [2024-07-26 02:09:23.833788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.833814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.833982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.834008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.834200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.834239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.834409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.834436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.834541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.834584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 
00:33:41.975 [2024-07-26 02:09:23.834763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.834794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.834925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.834951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.835118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.835144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.835249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.835275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.835452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.835478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 
00:33:41.975 [2024-07-26 02:09:23.835630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.835659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.835816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.835842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.835988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.975 [2024-07-26 02:09:23.836014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.975 qpair failed and we were unable to recover it. 00:33:41.975 [2024-07-26 02:09:23.836164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.836190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.836329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.836356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 
00:33:41.976 [2024-07-26 02:09:23.836523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.836549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.836655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.836680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.836820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.836846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.836971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.836999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.837158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.837184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 
00:33:41.976 [2024-07-26 02:09:23.837315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.837341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.837467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.837492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.837628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.837672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.837816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.837858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.837969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.837994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 
00:33:41.976 [2024-07-26 02:09:23.838169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.838196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.838305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.838331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.838459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.838483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.838647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.838690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.838836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.838864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 
00:33:41.976 [2024-07-26 02:09:23.839017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.839043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.839218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.839244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.839348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.839373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.839487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.839512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.839621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.839647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 
00:33:41.976 [2024-07-26 02:09:23.839778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.839803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.839912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.839938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.840047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.840079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.840226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.840254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.840386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.840413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 
00:33:41.976 [2024-07-26 02:09:23.840524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.840550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.840684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.840711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.840882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.840911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.841025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.841055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.841232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.841260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 
00:33:41.976 [2024-07-26 02:09:23.841368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.841394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.841502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.841527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.841660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.841685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.841794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.841819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.841953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.841993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 
00:33:41.976 [2024-07-26 02:09:23.842134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.842160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.842293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.842317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.842457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.842504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.842652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.842680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.842799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.842824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 
00:33:41.976 [2024-07-26 02:09:23.842962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.842988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.843161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.843187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.843326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.843351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.843485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.843528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.843653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.843680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 
00:33:41.976 [2024-07-26 02:09:23.843813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.843838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.843970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.843996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.844192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.844231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.844379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.844407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.976 qpair failed and we were unable to recover it. 00:33:41.976 [2024-07-26 02:09:23.844544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.976 [2024-07-26 02:09:23.844570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 
00:33:41.977 [2024-07-26 02:09:23.844722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.844751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.844934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.844960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.845096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.845123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.845236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.845262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.845378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.845405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 
00:33:41.977 [2024-07-26 02:09:23.845544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.845588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.845743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.845773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.845906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.845932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.846052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.846085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.846225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.846251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 
00:33:41.977 [2024-07-26 02:09:23.846380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.846405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.846517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.846541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.846672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.846697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.846826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.846864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.847008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.847036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 
00:33:41.977 [2024-07-26 02:09:23.847173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.847200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.847334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.847360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.847497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.847524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.847674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.847703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.847839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.847866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 
00:33:41.977 [2024-07-26 02:09:23.847995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.848024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.848196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.848227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.848388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.848417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.848564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.848593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.848725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.848765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 
00:33:41.977 [2024-07-26 02:09:23.848937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.848966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.849152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.849179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.849291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.849316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.849463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.849506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.849627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.849656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 
00:33:41.977 [2024-07-26 02:09:23.849803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.849831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.849968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.850010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.850163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.850191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.850327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.850353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.850514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.850544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 
00:33:41.977 [2024-07-26 02:09:23.850709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.850739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.850908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.850965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.851091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.851118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.851256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.851281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.851471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.851522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 
00:33:41.977 [2024-07-26 02:09:23.851804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.851857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.852007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.852036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.852173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.852200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.852321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.852360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.852497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.852524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 
00:33:41.977 [2024-07-26 02:09:23.852644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.852671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.852785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.852814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.852977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.853003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.853142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.853177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.853329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.853360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 
00:33:41.977 [2024-07-26 02:09:23.853521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.853547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.853653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.853679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.853783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.853808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.853941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.853967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.977 qpair failed and we were unable to recover it. 00:33:41.977 [2024-07-26 02:09:23.854081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.977 [2024-07-26 02:09:23.854109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.978 qpair failed and we were unable to recover it. 
00:33:41.978 [2024-07-26 02:09:23.854249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.978 [2024-07-26 02:09:23.854275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.978 qpair failed and we were unable to recover it. 00:33:41.978 [2024-07-26 02:09:23.854465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.978 [2024-07-26 02:09:23.854493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.978 qpair failed and we were unable to recover it. 00:33:41.978 [2024-07-26 02:09:23.854798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.978 [2024-07-26 02:09:23.854826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.978 qpair failed and we were unable to recover it. 00:33:41.978 [2024-07-26 02:09:23.854978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.978 [2024-07-26 02:09:23.855007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.978 qpair failed and we were unable to recover it. 00:33:41.978 [2024-07-26 02:09:23.855141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.978 [2024-07-26 02:09:23.855167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.978 qpair failed and we were unable to recover it. 
00:33:41.978 [2024-07-26 02:09:23.855303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.855330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.855453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.855483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.855645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.855674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.855822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.855851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.855990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.856019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.856190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.856217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.856356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.856383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.856526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.856551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.856696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.856722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.856965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.856991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.857110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.857137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.857300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.857326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.857452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.857480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.857592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.857621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.857771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.857801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.857948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.857987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.858143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.858182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.858323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.858350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.858503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.858528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.858675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.858716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.858866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.858894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.859042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.859078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.859232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.859262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.859452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.859478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.859615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.859642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.859851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.859908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.860117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.860146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.860305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.860334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.860471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.860514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.860674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.860718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.860853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.860879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.861036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.861081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.861218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.861248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.861417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.861460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.861604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.861631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.861739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.861764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.861896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.861921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.862053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.862085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.862187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.862212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.862347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.862373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.862506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.862531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.862687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.862715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.862883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.862909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.863020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.863048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.863200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.863227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.863362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.863406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.863517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.863543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.863676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.863701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.863858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.863884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.864020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.978 [2024-07-26 02:09:23.864047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.978 qpair failed and we were unable to recover it.
00:33:41.978 [2024-07-26 02:09:23.864171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.864197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.864338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.864362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.864469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.864495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.864652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.864677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.864830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.864858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.865015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.865045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.865194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.865220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.865385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.865427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.865560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.865607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.865737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.865766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.865915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.865944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.866103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.866131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.866266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.866292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.866458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.866487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.866617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.866659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.866811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.866839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.866958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.866987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.867114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.867140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.867273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.867298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.867408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.867450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.867564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.867591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.867750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.867779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.867905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.867933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.868073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.868112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.868278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.868323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.868455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.868499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.868654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.868696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.868834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.868861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.868995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.869021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.869215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.869246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.869366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.869394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.869537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.869566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.869711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.869743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.869868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.869895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.870057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.870094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.870291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.870319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.870463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.870490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.870625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.870669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.870835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.870861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.870997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.871023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.871156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.871200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.871365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.871395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.871570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.871598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.871710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.871737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.871938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.979 [2024-07-26 02:09:23.871966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.979 qpair failed and we were unable to recover it.
00:33:41.979 [2024-07-26 02:09:23.872135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.979 [2024-07-26 02:09:23.872162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.979 qpair failed and we were unable to recover it. 00:33:41.979 [2024-07-26 02:09:23.872276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.979 [2024-07-26 02:09:23.872301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.979 qpair failed and we were unable to recover it. 00:33:41.979 [2024-07-26 02:09:23.872443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.872468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.872604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.872633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.872777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.872805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 
00:33:41.980 [2024-07-26 02:09:23.872976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.873004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.873162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.873188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.873295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.873320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.873460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.873507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.873685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.873713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 
00:33:41.980 [2024-07-26 02:09:23.873833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.873861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.873998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.874023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.874145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.874171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.874279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.874304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.874463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.874493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 
00:33:41.980 [2024-07-26 02:09:23.874603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.874630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.874775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.874814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.874990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.875029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.875179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.875208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.875347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.875375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 
00:33:41.980 [2024-07-26 02:09:23.875530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.875574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.875763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.875807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.875920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.875946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.876067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.876106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.876251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.876277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 
00:33:41.980 [2024-07-26 02:09:23.876406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.876435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.876571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.876614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.876802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.876827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.876951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.876979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.877090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.877116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 
00:33:41.980 [2024-07-26 02:09:23.877225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.877250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.877400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.877428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.877648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.877713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.877856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.877883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.878035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.878076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 
00:33:41.980 [2024-07-26 02:09:23.878292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.878330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.878504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.878549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.878708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.878751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.878892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.878920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.879096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.879124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 
00:33:41.980 [2024-07-26 02:09:23.879276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.879322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.879481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.879530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.879662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.879688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.879832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.879858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.879991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.880017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 
00:33:41.980 [2024-07-26 02:09:23.880185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.880231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.880426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.880456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.880695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.880746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.880867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.880895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.881048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.881080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 
00:33:41.980 [2024-07-26 02:09:23.881218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.881244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.881396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.881423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.881675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.881727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.881958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.882010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.882200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.882226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 
00:33:41.980 [2024-07-26 02:09:23.882347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.882372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.882503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.882544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.980 qpair failed and we were unable to recover it. 00:33:41.980 [2024-07-26 02:09:23.882691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.980 [2024-07-26 02:09:23.882719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.882895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.882922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.883031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.883073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 
00:33:41.981 [2024-07-26 02:09:23.883229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.883254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.883397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.883422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.883537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.883579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.883782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.883811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.883955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.883983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 
00:33:41.981 [2024-07-26 02:09:23.884142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.884168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.884332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.884376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.884496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.884536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.884776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.884836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.884963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.884991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 
00:33:41.981 [2024-07-26 02:09:23.885171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.885210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.885345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.885383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.885507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.885551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.885712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.885753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.886109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.886136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 
00:33:41.981 [2024-07-26 02:09:23.886282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.886308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.886472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.886516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.886709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.886737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.886884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.886913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.887071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.887115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 
00:33:41.981 [2024-07-26 02:09:23.887230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.887255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.887418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.887443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.887618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.887661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.887894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.887957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.888110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.888137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 
00:33:41.981 [2024-07-26 02:09:23.888268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.888293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.888508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.888573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.888694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.888721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.888872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.888900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.889027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.889051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 
00:33:41.981 [2024-07-26 02:09:23.889179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.889203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.889382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.889410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.889638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.889687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.889835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.889864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.890016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.890045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 
00:33:41.981 [2024-07-26 02:09:23.890230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.890275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.890438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.890482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.890607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.890653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.890816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.890842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.890949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.890976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 
00:33:41.981 [2024-07-26 02:09:23.891118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.891145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.891277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.891303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.891439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.891466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.891577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.891604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.891714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.891741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 
00:33:41.981 [2024-07-26 02:09:23.891878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.891904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.892083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.892142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.892272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.892302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.892417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.892445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.892572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.892601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 
00:33:41.981 [2024-07-26 02:09:23.892829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.892861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.981 qpair failed and we were unable to recover it. 00:33:41.981 [2024-07-26 02:09:23.892985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.981 [2024-07-26 02:09:23.893013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.893219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.893245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.893493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.893542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.893715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.893743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 
00:33:41.982 [2024-07-26 02:09:23.893869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.893897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.894042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.894084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.894263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.894289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.894416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.894442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.894579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.894622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 
00:33:41.982 [2024-07-26 02:09:23.894783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.894811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.895037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.895070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.895249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.895287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.895468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.895513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.895686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.895734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 
00:33:41.982 [2024-07-26 02:09:23.895887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.895915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.896039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.896073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.896207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.896233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.896337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.896364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.896521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.896547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 
00:33:41.982 [2024-07-26 02:09:23.896709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.896738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.896914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.896942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.897116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.897143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.897282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.897311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.897466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.897497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 
00:33:41.982 [2024-07-26 02:09:23.897648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.897677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.897855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.897923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.898055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.898086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.898269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.898295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.898485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.898513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 
00:33:41.982 [2024-07-26 02:09:23.898658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.898687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.898832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.898860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.899011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.899037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.899159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.899185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.899320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.899363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 
00:33:41.982 [2024-07-26 02:09:23.899538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.899567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.899720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.899748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.899863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.899892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.900009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.900034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.900152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.900178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 
00:33:41.982 [2024-07-26 02:09:23.900289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.900315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.900461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.900487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.900640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.900668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.900872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.900901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.901024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.901049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 
00:33:41.982 [2024-07-26 02:09:23.901193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.901219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.901375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.901403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.901552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.901582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.901734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.982 [2024-07-26 02:09:23.901763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.982 qpair failed and we were unable to recover it. 00:33:41.982 [2024-07-26 02:09:23.901915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.901943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 
00:33:41.983 [2024-07-26 02:09:23.902079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.902112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.902249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.902275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.902406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.902436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.902566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.902595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.902737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.902766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 
00:33:41.983 [2024-07-26 02:09:23.902941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.902969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.903154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.903181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.903295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.903321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.903476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.903502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.903624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.903653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 
00:33:41.983 [2024-07-26 02:09:23.903798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.903826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.903945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.903973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.904096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.904123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.904257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.904282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.904413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.904441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 
00:33:41.983 [2024-07-26 02:09:23.904567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.904609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.904760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.904789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.904938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.904967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.905114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.905140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 00:33:41.983 [2024-07-26 02:09:23.905271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.983 [2024-07-26 02:09:23.905296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:41.983 qpair failed and we were unable to recover it. 
00:33:41.983 [2024-07-26 02:09:23.905485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.905514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.905647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.905688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.905817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.905845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.905960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.905988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.906151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.906178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.906281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.906307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.906468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.906496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.906666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.906695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.906818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.906846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.907004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.907032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.907175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.907201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.907362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.907387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.983 qpair failed and we were unable to recover it.
00:33:41.983 [2024-07-26 02:09:23.907576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.983 [2024-07-26 02:09:23.907604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.907737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.907765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.907950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.907979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.908112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.908138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.908238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.908264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.908366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.908393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.908532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.908558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.908683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.908725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.908853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.908881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.909036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.909069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.909222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.909268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.909444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.909472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.909580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.909609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.909761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.909789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.909944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.909973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.910112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.910139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.910289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.910318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.910434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.910462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.910635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.910664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.910776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.910804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.910992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.911017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.911177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.911204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.911313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.911338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.911486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.911514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.911666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.911695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.911813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.911842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.911959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.911988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.912147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.912174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.912336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.912362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.912539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.912568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.912720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.912749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.912883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.912927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.913090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.913118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.913246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.913272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.913380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.913406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.913547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.984 [2024-07-26 02:09:23.913573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.984 qpair failed and we were unable to recover it.
00:33:41.984 [2024-07-26 02:09:23.913708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.913735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.913847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.913873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.914008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.914034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.914225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.914265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.914386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.914414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.914576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.914603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.914797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.914824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.914972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.914998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.915124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.915160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.915296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.915330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.915472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.915498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.915602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.915628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.915841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.915870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.916017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.916047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.916223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.916266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.916416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.916443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.916555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.916581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.916688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.916714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.916902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.916931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.917077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.917115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.917223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.917249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.917387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.917413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.917548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.917574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.917682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.917708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.917818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.917846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.917960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.917986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.918156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.918184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.918305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.918348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.918486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.918513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.918682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.918727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.918883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.918909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.919044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.919075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.919198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.919225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.919331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.919357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.919468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.919494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.919624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.919650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.919780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.919805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.919939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.919965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.985 [2024-07-26 02:09:23.920087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.985 [2024-07-26 02:09:23.920114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.985 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.920249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.920274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.920440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.920466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.920568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.920601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.920734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.920760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.920864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.920890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.921003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.921029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.921153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.921179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.921310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.921341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.921480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.921506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.921641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.921667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.921780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.921806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.921939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.921969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.922080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.922107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.922243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.922269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.922381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.922407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.922540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.922566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.922710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.922736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.922864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.922889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.923014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.923043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.923185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.923210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.923346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.923372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.923505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.923534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.923662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.923687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.923792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.923818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.923931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.923956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.924072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.924099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.924242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.924267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.924440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.986 [2024-07-26 02:09:23.924464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:41.986 qpair failed and we were unable to recover it.
00:33:41.986 [2024-07-26 02:09:23.924599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-26 02:09:23.924624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 00:33:41.986 [2024-07-26 02:09:23.924763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-26 02:09:23.924811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 00:33:41.986 [2024-07-26 02:09:23.924947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-26 02:09:23.924991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 00:33:41.986 [2024-07-26 02:09:23.925176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-26 02:09:23.925204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 00:33:41.986 [2024-07-26 02:09:23.925371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-26 02:09:23.925398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 
00:33:41.986 [2024-07-26 02:09:23.925533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-26 02:09:23.925563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 00:33:41.986 [2024-07-26 02:09:23.925725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-26 02:09:23.925751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 00:33:41.986 [2024-07-26 02:09:23.925891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-26 02:09:23.925917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 00:33:41.986 [2024-07-26 02:09:23.926051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-26 02:09:23.926085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 00:33:41.986 [2024-07-26 02:09:23.926201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-26 02:09:23.926226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 
00:33:41.986 [2024-07-26 02:09:23.926333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.986 [2024-07-26 02:09:23.926358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.986 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.926493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.926518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.926616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.926641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.926777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.926803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.926992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.927024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 
00:33:41.987 [2024-07-26 02:09:23.927196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.927224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.927390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.927420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.927583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.927612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.927796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.927823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.927935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.927980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 
00:33:41.987 [2024-07-26 02:09:23.928102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.928146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.928282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.928308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.928442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.928483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.928602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.928629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.928760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.928785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 
00:33:41.987 [2024-07-26 02:09:23.928888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.928912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.929036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.929073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:41.987 qpair failed and we were unable to recover it. 00:33:41.987 [2024-07-26 02:09:23.929216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.987 [2024-07-26 02:09:23.929242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.929360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.929393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.929507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.929534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 
00:33:42.276 [2024-07-26 02:09:23.929691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.929717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.929875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.929905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.930026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.930057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.930200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.930225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.930353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.930392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 
00:33:42.276 [2024-07-26 02:09:23.930578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.930625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.930791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.930818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.930959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.930985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.931123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.931150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.931261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.931287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 
00:33:42.276 [2024-07-26 02:09:23.931424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.931470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.931584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.931614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.931779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.931805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.931943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.931969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.932143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.932184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 
00:33:42.276 [2024-07-26 02:09:23.932329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.932357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.932489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.932515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.932678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.932708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.932883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.932913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.933068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.933113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 
00:33:42.276 [2024-07-26 02:09:23.933251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.933277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.933428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.933455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.933563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.933589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.933701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.933727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.933834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.933878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 
00:33:42.276 [2024-07-26 02:09:23.934056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.934103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.934281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.934307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.934416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.934442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.934584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.276 [2024-07-26 02:09:23.934611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.276 qpair failed and we were unable to recover it. 00:33:42.276 [2024-07-26 02:09:23.934715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.934741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 
00:33:42.277 [2024-07-26 02:09:23.934878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.934904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.935012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.935038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.935165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.935191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.935294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.935320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.935424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.935449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 
00:33:42.277 [2024-07-26 02:09:23.935551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.935577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.935724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.935762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.935904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.935932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.936071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.936100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.936222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.936249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 
00:33:42.277 [2024-07-26 02:09:23.936407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.936436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.936571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.936597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.936735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.936761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.936870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.936896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.937031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.937057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 
00:33:42.277 [2024-07-26 02:09:23.937290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.937317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.937437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.937465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.937604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.937650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.937831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.937859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.937988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.938013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 
00:33:42.277 [2024-07-26 02:09:23.938149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.938188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.938309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.938337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.938515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.938564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.938703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.938748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.938869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.938897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 
00:33:42.277 [2024-07-26 02:09:23.939054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.939091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.939201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.939228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.939345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.939371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.939505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.939537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.939686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.939733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 
00:33:42.277 [2024-07-26 02:09:23.939883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.939915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.940092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.940132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.277 [2024-07-26 02:09:23.940270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.277 [2024-07-26 02:09:23.940309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.277 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.940453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.940480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.940618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.940644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 
00:33:42.278 [2024-07-26 02:09:23.940750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.940780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.940891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.940917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.941021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.941048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.941201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.941240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.941353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.941380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 
00:33:42.278 [2024-07-26 02:09:23.941522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.941548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.941673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.941717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.941864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.941890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.942020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.942046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.942206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.942245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 
00:33:42.278 [2024-07-26 02:09:23.942389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.942419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.942594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.942642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.942808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.942853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.942997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.943025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.943188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.943228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 
00:33:42.278 [2024-07-26 02:09:23.943413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.943471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.943636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.943683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.943848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.943896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.944003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.944029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.944193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.944238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 
00:33:42.278 [2024-07-26 02:09:23.944368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.944398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.944587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.944613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.944716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.944742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.944878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.944904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.945017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.945042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 
00:33:42.278 [2024-07-26 02:09:23.945177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.945202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.945312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.945337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.945469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.945501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.945622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.945648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.945787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.945815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 
00:33:42.278 [2024-07-26 02:09:23.945967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.278 [2024-07-26 02:09:23.945994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.278 qpair failed and we were unable to recover it. 00:33:42.278 [2024-07-26 02:09:23.946152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.946178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.946317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.946341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.946474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.946499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.946634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.946661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 
00:33:42.279 [2024-07-26 02:09:23.946794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.946822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.946999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.947028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.947169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.947195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.947324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.947352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.947498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.947526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 
00:33:42.279 [2024-07-26 02:09:23.947672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.947701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.947879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.947908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.948069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.948094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.948206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.948232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.948420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.948448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 
00:33:42.279 [2024-07-26 02:09:23.948619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.948647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.948800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.948828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.949005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.949034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.949165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.949193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.949373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.949406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 
00:33:42.279 [2024-07-26 02:09:23.949621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.949667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.949880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.949926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.950077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.950105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.950208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.950235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.950381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.950416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 
00:33:42.279 [2024-07-26 02:09:23.950588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.950631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.950791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.950817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.950950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.950976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.951104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.951133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.951279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.951309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 
00:33:42.279 [2024-07-26 02:09:23.951438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.951485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.951648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.951693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.951858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.951882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.951986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.952012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 00:33:42.279 [2024-07-26 02:09:23.952153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.279 [2024-07-26 02:09:23.952178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.279 qpair failed and we were unable to recover it. 
00:33:42.280 [2024-07-26 02:09:23.952304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.952332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.952460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.952504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.952629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.952659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.952819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.952849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.953010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.953037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 
00:33:42.280 [2024-07-26 02:09:23.953197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.953223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.953408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.953438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.953552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.953581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.953732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.953761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.953937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.953983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 
00:33:42.280 [2024-07-26 02:09:23.954130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.954158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.954273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.954298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.954456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.954480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.954626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.954658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.954826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.954854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 
00:33:42.280 [2024-07-26 02:09:23.955008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.955038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.955192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.955236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.955380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.955408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.955582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.955609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.955776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.955819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 
00:33:42.280 [2024-07-26 02:09:23.955961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.955988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.956152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.956182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.956308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.956336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.956485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.956512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.956669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.956695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 
00:33:42.280 [2024-07-26 02:09:23.956831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.956856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.956986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.957010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.957148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.957174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.957310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.957352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.957476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.957503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 
00:33:42.280 [2024-07-26 02:09:23.957650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.957678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.957831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.957859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.958020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.958044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-26 02:09:23.958161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-26 02:09:23.958187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.958321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.958346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 
00:33:42.281 [2024-07-26 02:09:23.958508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.958536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.958672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.958717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.958893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.958922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.959073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.959125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.959242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.959266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 
00:33:42.281 [2024-07-26 02:09:23.959424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.959452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.959594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.959621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.959742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.959771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.959889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.959923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.960053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.960085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 
00:33:42.281 [2024-07-26 02:09:23.960220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.960245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.960406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.960435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.960560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.960585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.960719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.960745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.960854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.960879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 
00:33:42.281 [2024-07-26 02:09:23.960987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.961013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.961128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.961153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.961303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.961345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.961487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.961533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.961710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.961739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 
00:33:42.281 [2024-07-26 02:09:23.961889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.961918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.962070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.962115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.962259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.962285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.962386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.962430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.962591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.962617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 
00:33:42.281 [2024-07-26 02:09:23.962755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.962781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-26 02:09:23.962939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-26 02:09:23.962968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.963101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.963127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.963295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.963330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.963485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.963534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 
00:33:42.282 [2024-07-26 02:09:23.963685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.963713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.963863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.963892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.964020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.964044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.964184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.964210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.964337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.964376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 
00:33:42.282 [2024-07-26 02:09:23.964550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.964604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.964770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.964815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.964924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.964950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.965082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.965109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.965294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.965338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 
00:33:42.282 [2024-07-26 02:09:23.965474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.965520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.965709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.965754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.965866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.965892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.966031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.966057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.966206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.966251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 
00:33:42.282 [2024-07-26 02:09:23.966424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.966451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.966564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.966590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.966729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.966756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.966918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.966945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.967090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.967117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 
00:33:42.282 [2024-07-26 02:09:23.967256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.967282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.967392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.967417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.967572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.967601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.967766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.967812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.967958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.967983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 
00:33:42.282 [2024-07-26 02:09:23.968116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.968142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.968303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.968328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.968440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.968465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.968616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.968644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-26 02:09:23.968780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.968822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 
00:33:42.282 [2024-07-26 02:09:23.968984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-26 02:09:23.969008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.969120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.969145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.969305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.969360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.969518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.969549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.969682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.969728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 
00:33:42.283 [2024-07-26 02:09:23.969908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.969937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.970069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.970114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.970254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.970281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.970448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.970492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.970620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.970649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 
00:33:42.283 [2024-07-26 02:09:23.970821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.970850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.970991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.971018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.971144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.971169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.971305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.971331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.971464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.971489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 
00:33:42.283 [2024-07-26 02:09:23.971591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.971616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.971724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.971749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.971926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.971956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.972116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.972142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.972252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.972278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 
00:33:42.283 [2024-07-26 02:09:23.972412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.972454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.972658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.972687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.972925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.972954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.973118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.973145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.973275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.973300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 
00:33:42.283 [2024-07-26 02:09:23.973407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.973432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.973571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.973597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.973727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.973752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.973913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.973969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.974119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.974148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 
00:33:42.283 [2024-07-26 02:09:23.974313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.974358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.974519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.974562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.974691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.974734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.974856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-26 02:09:23.974882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-26 02:09:23.975041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.975078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 
00:33:42.284 [2024-07-26 02:09:23.975222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.975248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.975382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.975407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.975582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.975613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.975765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.975810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.975958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.975987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 
00:33:42.284 [2024-07-26 02:09:23.976173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.976213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.976390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.976433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.976603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.976639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.976806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.976849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.976981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.977007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 
00:33:42.284 [2024-07-26 02:09:23.977137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.977164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.977272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.977299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.977434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.977462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.977605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.977633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.977777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.977805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 
00:33:42.284 [2024-07-26 02:09:23.977920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.977948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.978075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.978101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.978213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.978238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.978388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.978417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.978559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.978587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 
00:33:42.284 [2024-07-26 02:09:23.978758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.978790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.978920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.978949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.979118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.979145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.979260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.979287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.979390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.979416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 
00:33:42.284 [2024-07-26 02:09:23.979578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.979622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.979771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.979801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.979949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.979978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.980135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.980161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.980297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.980323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 
00:33:42.284 [2024-07-26 02:09:23.980428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.980454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.980566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.980591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.980728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.284 [2024-07-26 02:09:23.980754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.284 qpair failed and we were unable to recover it. 00:33:42.284 [2024-07-26 02:09:23.980884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.980909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.981014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.981044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 
00:33:42.285 [2024-07-26 02:09:23.981192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.981218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.981354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.981380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.981487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.981513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.981649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.981678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.981783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.981809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 
00:33:42.285 [2024-07-26 02:09:23.981975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.982005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.982132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.982160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.982297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.982323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.982505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.982533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.982656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.982687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 
00:33:42.285 [2024-07-26 02:09:23.982850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.982876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.983014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.983040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.983217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.983243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.983359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.983385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.983517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.983542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 
00:33:42.285 [2024-07-26 02:09:23.983679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.983705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.983840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.983866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.984029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.984055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.984207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.984233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.984347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.984372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 
00:33:42.285 [2024-07-26 02:09:23.984474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.984499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.984644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.984669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.984804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.984830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.984940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.984966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.985091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.985131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 
00:33:42.285 [2024-07-26 02:09:23.985314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.985353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.985495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.985545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.985707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.985750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.985856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.985883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.986050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.986083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 
00:33:42.285 [2024-07-26 02:09:23.986217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.986261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.285 [2024-07-26 02:09:23.986420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.285 [2024-07-26 02:09:23.986463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.285 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-26 02:09:23.986594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.986637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-26 02:09:23.986788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.986815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-26 02:09:23.986956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.986985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 
00:33:42.286 [2024-07-26 02:09:23.987145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.987184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-26 02:09:23.987330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.987361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-26 02:09:23.987544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.987574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-26 02:09:23.987774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.987820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-26 02:09:23.987948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.987975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 
00:33:42.286 [2024-07-26 02:09:23.988138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.988186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-26 02:09:23.988329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.988356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-26 02:09:23.988496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.988522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-26 02:09:23.988681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.988707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 00:33:42.286 [2024-07-26 02:09:23.988842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.286 [2024-07-26 02:09:23.988867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.286 qpair failed and we were unable to recover it. 
00:33:42.286 [2024-07-26 02:09:23.991230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.286 [2024-07-26 02:09:23.991260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:42.286 qpair failed and we were unable to recover it.
00:33:42.287 [2024-07-26 02:09:23.993077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.287 [2024-07-26 02:09:23.993123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.287 qpair failed and we were unable to recover it.
00:33:42.289 [2024-07-26 02:09:24.007665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-26 02:09:24.007690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 00:33:42.289 [2024-07-26 02:09:24.007799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-26 02:09:24.007825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 00:33:42.289 [2024-07-26 02:09:24.007987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-26 02:09:24.008016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 00:33:42.289 [2024-07-26 02:09:24.008182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-26 02:09:24.008209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 00:33:42.289 [2024-07-26 02:09:24.008314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-26 02:09:24.008340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 
00:33:42.289 [2024-07-26 02:09:24.008493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-26 02:09:24.008535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 00:33:42.289 [2024-07-26 02:09:24.008721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-26 02:09:24.008767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 00:33:42.289 [2024-07-26 02:09:24.008887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-26 02:09:24.008915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 00:33:42.289 [2024-07-26 02:09:24.009040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-26 02:09:24.009071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 00:33:42.289 [2024-07-26 02:09:24.009209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-26 02:09:24.009235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 
00:33:42.289 [2024-07-26 02:09:24.009409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-26 02:09:24.009460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 00:33:42.289 [2024-07-26 02:09:24.009579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.009608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.009741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.009785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.009948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.009973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.010136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.010174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 
00:33:42.290 [2024-07-26 02:09:24.010293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.010320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.010482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.010509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.010654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.010681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.010801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.010828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.011012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.011037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 
00:33:42.290 [2024-07-26 02:09:24.011154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.011182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.011337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.011376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.011521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.011550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.011715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.011744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.011894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.011923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 
00:33:42.290 [2024-07-26 02:09:24.012050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.012085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.012197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.012224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.012349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.012378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.012513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.012557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.012701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.012731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 
00:33:42.290 [2024-07-26 02:09:24.012847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.012878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.013064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.013091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.013233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.013259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.013420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.013449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.013594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.013623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 
00:33:42.290 [2024-07-26 02:09:24.013735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.013763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.013938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.013965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.014089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.014129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.014282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.014310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.014468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.014513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 
00:33:42.290 [2024-07-26 02:09:24.014667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.014711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.014910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.014954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.015118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.015157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.015338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.015397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-26 02:09:24.015549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-26 02:09:24.015580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 
00:33:42.291 [2024-07-26 02:09:24.015755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.015785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.015941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.015968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.016134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.016161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.016294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.016320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.016427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.016453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 
00:33:42.291 [2024-07-26 02:09:24.016607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.016638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.016802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.016831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.017005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.017034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.017250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.017290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.017428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.017457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 
00:33:42.291 [2024-07-26 02:09:24.017600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.017625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.017728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.017753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.017924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.017964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.018111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.018139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.018280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.018307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 
00:33:42.291 [2024-07-26 02:09:24.018416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.018442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.018617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.018646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.018816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.018864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.018978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.019007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.019214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.019240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 
00:33:42.291 [2024-07-26 02:09:24.019374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.019405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.019577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.019624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.019786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.019845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.019989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.020017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.020136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.020163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 
00:33:42.291 [2024-07-26 02:09:24.020357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.020383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.020524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.020550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.020713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.020759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.020893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-26 02:09:24.020919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-26 02:09:24.021066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.021097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 
00:33:42.292 [2024-07-26 02:09:24.021245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.021292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.021478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.021521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.021678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.021707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.021854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.021893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.022041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.022078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 
00:33:42.292 [2024-07-26 02:09:24.022237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.022265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.022414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.022442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.022609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.022656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.022824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.022870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.022997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.023023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 
00:33:42.292 [2024-07-26 02:09:24.023132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.023158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.023310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.023339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.023465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.023491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.023659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.023684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.023821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.023846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 
00:33:42.292 [2024-07-26 02:09:24.023962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.023986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.024156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.024181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.024297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.024339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.024492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.024519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.024652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.024698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 
00:33:42.292 [2024-07-26 02:09:24.024869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.024897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.025027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.025052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.025200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.025225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.025332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.025358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.025464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.025489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 
00:33:42.292 [2024-07-26 02:09:24.025699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.025724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.025860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.025886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.025994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.026019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.026178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.026218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.026391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.026428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 
00:33:42.292 [2024-07-26 02:09:24.026588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.026619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.026796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.026825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.026992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-26 02:09:24.027035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-26 02:09:24.027225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.027254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.027422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.027465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 
00:33:42.293 [2024-07-26 02:09:24.027637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.027683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.027836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.027868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.028050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.028088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.028249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.028276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.028406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.028431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 
00:33:42.293 [2024-07-26 02:09:24.028618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.028664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.028801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.028826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.028958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.028983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.029122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.029161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.029311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.029338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 
00:33:42.293 [2024-07-26 02:09:24.029472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.029498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.029627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.029676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.029838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.029864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.030002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.030028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.030174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.030212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 
00:33:42.293 [2024-07-26 02:09:24.030356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.030398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.030596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.030627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.030799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.030846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.031002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.031032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.031197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.031224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 
00:33:42.293 [2024-07-26 02:09:24.031360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.031386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.031564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.031597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.031793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.031826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.032007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.032035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.032184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.032210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 
00:33:42.293 [2024-07-26 02:09:24.032344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.032370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.032511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.032537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.032692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.032738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.032899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.032926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.033067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.033093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 
00:33:42.293 [2024-07-26 02:09:24.033234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.033259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.293 [2024-07-26 02:09:24.033381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.293 [2024-07-26 02:09:24.033407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.293 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.033516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.033541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.033761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.033808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.033957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.033993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 
00:33:42.294 [2024-07-26 02:09:24.034157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.034184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.034325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.034351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.034484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.034510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.034623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.034648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.034770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.034798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 
00:33:42.294 [2024-07-26 02:09:24.034948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.034974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.035111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.035138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.035276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.035302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.035452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.035477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.035663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.035691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 
00:33:42.294 [2024-07-26 02:09:24.035806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.035836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.036053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.036087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.036225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.036251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.036365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.036391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.036532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.036557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 
00:33:42.294 [2024-07-26 02:09:24.036729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.036776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.036902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.036930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.037069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.037113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.037222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.037249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.037438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.037466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 
00:33:42.294 [2024-07-26 02:09:24.037611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.037639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.037757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.037786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.037913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.037943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.038093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.038120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 00:33:42.294 [2024-07-26 02:09:24.038275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.294 [2024-07-26 02:09:24.038300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.294 qpair failed and we were unable to recover it. 
00:33:42.294 [2024-07-26 02:09:24.038438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.294 [2024-07-26 02:09:24.038463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.294 qpair failed and we were unable to recover it.
[... the three-record error sequence above repeats continuously from 02:09:24.038 through 02:09:24.058 (log timestamps 00:33:42.294 to 00:33:42.298), cycling through tqpair handles 0x7fd158000b90, 0x7fd148000b90, and 0x1545f40; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:33:42.298 [2024-07-26 02:09:24.058668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.058695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.058801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.058827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.058938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.058967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.059077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.059103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.059268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.059294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 
00:33:42.298 [2024-07-26 02:09:24.059504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.059552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.059709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.059735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.059853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.059878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.060056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.060086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.060252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.060277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 
00:33:42.298 [2024-07-26 02:09:24.060381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.060407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.060569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.060595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.060742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.060768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.060872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.060899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.061046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.061108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 
00:33:42.298 [2024-07-26 02:09:24.061233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.061263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.061447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.061477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.061632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.061662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.061848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.061875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.062024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.062053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 
00:33:42.298 [2024-07-26 02:09:24.062239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.298 [2024-07-26 02:09:24.062267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.298 qpair failed and we were unable to recover it. 00:33:42.298 [2024-07-26 02:09:24.062377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.062403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.062547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.062588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.062735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.062763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.062892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.062917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 
00:33:42.299 [2024-07-26 02:09:24.063076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.063117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.063291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.063319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.063437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.063463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.063604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.063645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.063759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.063787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 
00:33:42.299 [2024-07-26 02:09:24.063950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.063976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.064139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.064183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.064317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.064364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.064468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.064502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.064669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.064711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 
00:33:42.299 [2024-07-26 02:09:24.064834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.064864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.065011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.065040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.065186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.065213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.065334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.065360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.065468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.065494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 
00:33:42.299 [2024-07-26 02:09:24.065629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.065654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.065787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.065813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.065922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.065947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.066084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.066125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.066262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.066305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 
00:33:42.299 [2024-07-26 02:09:24.066445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.066472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.066617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.066643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.066759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.066786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.066922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.066948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.067048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.067081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 
00:33:42.299 [2024-07-26 02:09:24.067217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.067242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.067370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.067396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.067508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.067533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.067679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.067709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 00:33:42.299 [2024-07-26 02:09:24.067868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.299 [2024-07-26 02:09:24.067894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.299 qpair failed and we were unable to recover it. 
00:33:42.300 [2024-07-26 02:09:24.068027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.068053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.068219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.068246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.068354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.068379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.068487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.068513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.068622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.068648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 
00:33:42.300 [2024-07-26 02:09:24.068789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.068818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.068954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.068980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.069105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.069131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.069290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.069315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.069421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.069447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 
00:33:42.300 [2024-07-26 02:09:24.069578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.069604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.069765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.069791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.069950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.069978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.070145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.070172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.070312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.070338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 
00:33:42.300 [2024-07-26 02:09:24.070557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.070583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.070860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.070914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.071080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.071106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.071247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.071288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 00:33:42.300 [2024-07-26 02:09:24.071458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.300 [2024-07-26 02:09:24.071484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.300 qpair failed and we were unable to recover it. 
00:33:42.300 [2024-07-26 02:09:24.071625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.300 [2024-07-26 02:09:24.071651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.300 qpair failed and we were unable to recover it.
00:33:42.301 [2024-07-26 02:09:24.075003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.301 [2024-07-26 02:09:24.075046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.301 qpair failed and we were unable to recover it.
00:33:42.302 [2024-07-26 02:09:24.083138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.302 [2024-07-26 02:09:24.083182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:42.302 qpair failed and we were unable to recover it.
00:33:42.304 [2024-07-26 02:09:24.091938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.091966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.092113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.092142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.092272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.092297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.092403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.092429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.092568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.092598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 
00:33:42.304 [2024-07-26 02:09:24.092772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.092798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.092914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.092939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.093051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.093082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.093215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.093241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.093393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.093432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 
00:33:42.304 [2024-07-26 02:09:24.093571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.093599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.093709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.093736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.093876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.093903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.094050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.094086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.094247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.094274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 
00:33:42.304 [2024-07-26 02:09:24.094411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.094437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.094541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.094567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.094728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.094754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.094877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.094907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.095064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.095109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 
00:33:42.304 [2024-07-26 02:09:24.095247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.095273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.095422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.095452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.095569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.095597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.095774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.095800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.095935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.095980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 
00:33:42.304 [2024-07-26 02:09:24.096127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.096157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.096286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.096312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.096454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.096481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.304 [2024-07-26 02:09:24.096654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.304 [2024-07-26 02:09:24.096681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.304 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.096819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.096845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 
00:33:42.305 [2024-07-26 02:09:24.096965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.097008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.097180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.097207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.097346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.097372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.097508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.097534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.097688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.097716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 
00:33:42.305 [2024-07-26 02:09:24.097902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.097927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.098096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.098129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.098269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.098312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.098455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.098482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.098617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.098643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 
00:33:42.305 [2024-07-26 02:09:24.098859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.098923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.099051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.099083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.099223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.099249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.099381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.099407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.099604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.099635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 
00:33:42.305 [2024-07-26 02:09:24.099792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.099823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.099946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.099975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.100124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.100151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.100263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.100290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.100440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.100466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 
00:33:42.305 [2024-07-26 02:09:24.100630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.100656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.100816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.100845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.100985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.101014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.101147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.101175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.101313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.101339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 
00:33:42.305 [2024-07-26 02:09:24.101499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.101529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.101655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.101682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.101847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.101889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.102045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.102090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.102266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.102293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 
00:33:42.305 [2024-07-26 02:09:24.102413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.102441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.102575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.102602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.305 [2024-07-26 02:09:24.102798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.305 [2024-07-26 02:09:24.102824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.305 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.103013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.103042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.103198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.103236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 
00:33:42.306 [2024-07-26 02:09:24.103374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.103401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.103507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.103549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.103777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.103825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.103983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.104008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.104188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.104227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 
00:33:42.306 [2024-07-26 02:09:24.104392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.104423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.104555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.104582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.104749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.104792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.104941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.104969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.105128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.105155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 
00:33:42.306 [2024-07-26 02:09:24.105286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.105313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.105439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.105468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.105625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.105651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.105830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.105859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 00:33:42.306 [2024-07-26 02:09:24.106015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.306 [2024-07-26 02:09:24.106041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.306 qpair failed and we were unable to recover it. 
00:33:42.309 [the connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence above repeats with varying timestamps (02:09:24.106181 through 02:09:24.125437) for tqpair handles 0x7fd148000b90, 0x1545f40, and 0x7fd158000b90, all targeting addr=10.0.0.2, port=4420]
00:33:42.309 [2024-07-26 02:09:24.125571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.309 [2024-07-26 02:09:24.125597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.309 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.125706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.125737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.125875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.125903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.126029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.126079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.126248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.126275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 
00:33:42.310 [2024-07-26 02:09:24.126406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.126436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.126559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.126589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.126719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.126746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.126890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.126916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.127049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.127083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 
00:33:42.310 [2024-07-26 02:09:24.127195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.127222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.127362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.127389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.127504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.127532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.127674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.127704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.127834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.127861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 
00:33:42.310 [2024-07-26 02:09:24.127967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.127994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.128126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.128169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.128309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.128336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.128447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.128474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.128612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.128639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 
00:33:42.310 [2024-07-26 02:09:24.128847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.128876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.129082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.129135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.129278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.129305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.129442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.129472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.129606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.129633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 
00:33:42.310 [2024-07-26 02:09:24.129775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.129802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.129929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.129958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.130121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.130148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.130289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.130315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.130513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.130540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 
00:33:42.310 [2024-07-26 02:09:24.130704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.130730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.130892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.130925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.131078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.131113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.131266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.310 [2024-07-26 02:09:24.131291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.310 qpair failed and we were unable to recover it. 00:33:42.310 [2024-07-26 02:09:24.131401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.131428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 
00:33:42.311 [2024-07-26 02:09:24.131588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.131613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.131817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.131843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.131958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.132001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.132148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.132176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.132312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.132338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 
00:33:42.311 [2024-07-26 02:09:24.132472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.132517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.132689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.132752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.132884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.132911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.133077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.133104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.133226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.133252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 
00:33:42.311 [2024-07-26 02:09:24.133373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.133399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.133533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.133559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.133697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.133723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.133831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.133857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.133966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.134011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 
00:33:42.311 [2024-07-26 02:09:24.134166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.134205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.134317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.134342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.134530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.134594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.134777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.134802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.134915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.134941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 
00:33:42.311 [2024-07-26 02:09:24.135084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.135112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.135265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.135291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.135402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.135429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.135591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.135616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.135827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.135887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 
00:33:42.311 [2024-07-26 02:09:24.136014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.136055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.136220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.136247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.136404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.136431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.136558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.136583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.136713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.136739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 
00:33:42.311 [2024-07-26 02:09:24.136863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.136890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.137073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.137098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.311 [2024-07-26 02:09:24.137234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.311 [2024-07-26 02:09:24.137260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.311 qpair failed and we were unable to recover it. 00:33:42.312 [2024-07-26 02:09:24.137388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.312 [2024-07-26 02:09:24.137420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.312 qpair failed and we were unable to recover it. 00:33:42.312 [2024-07-26 02:09:24.137558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.312 [2024-07-26 02:09:24.137583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.312 qpair failed and we were unable to recover it. 
00:33:42.312 [2024-07-26 02:09:24.137725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.312 [2024-07-26 02:09:24.137750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.312 qpair failed and we were unable to recover it. 00:33:42.312 [2024-07-26 02:09:24.137858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.312 [2024-07-26 02:09:24.137882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.312 qpair failed and we were unable to recover it. 00:33:42.312 [2024-07-26 02:09:24.138028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.312 [2024-07-26 02:09:24.138056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.312 qpair failed and we were unable to recover it. 00:33:42.312 [2024-07-26 02:09:24.138194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.312 [2024-07-26 02:09:24.138219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.312 qpair failed and we were unable to recover it. 00:33:42.312 [2024-07-26 02:09:24.138334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.312 [2024-07-26 02:09:24.138358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.312 qpair failed and we were unable to recover it. 
00:33:42.312 [2024-07-26 02:09:24.138525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.312 [2024-07-26 02:09:24.138551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.312 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed with errno = 111 (ECONNREFUSED), the sock connection error for tqpair=0x1545f40 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats approximately 115 more times between 02:09:24.138692 and 02:09:24.157787 ...]
00:33:42.315 [2024-07-26 02:09:24.157923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.315 [2024-07-26 02:09:24.157952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.315 qpair failed and we were unable to recover it. 00:33:42.315 [2024-07-26 02:09:24.158070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.315 [2024-07-26 02:09:24.158096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.315 qpair failed and we were unable to recover it. 00:33:42.315 [2024-07-26 02:09:24.158229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.315 [2024-07-26 02:09:24.158254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.315 qpair failed and we were unable to recover it. 00:33:42.315 [2024-07-26 02:09:24.158369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.315 [2024-07-26 02:09:24.158412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.315 qpair failed and we were unable to recover it. 00:33:42.315 [2024-07-26 02:09:24.158551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.315 [2024-07-26 02:09:24.158578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.315 qpair failed and we were unable to recover it. 
00:33:42.315 [2024-07-26 02:09:24.158741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.315 [2024-07-26 02:09:24.158766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.315 qpair failed and we were unable to recover it. 00:33:42.315 [2024-07-26 02:09:24.158918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.315 [2024-07-26 02:09:24.158946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.315 qpair failed and we were unable to recover it. 00:33:42.315 [2024-07-26 02:09:24.159099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.315 [2024-07-26 02:09:24.159128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.315 qpair failed and we were unable to recover it. 00:33:42.315 [2024-07-26 02:09:24.159269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.315 [2024-07-26 02:09:24.159294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.315 qpair failed and we were unable to recover it. 00:33:42.315 [2024-07-26 02:09:24.159435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.315 [2024-07-26 02:09:24.159460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.315 qpair failed and we were unable to recover it. 
00:33:42.316 [2024-07-26 02:09:24.159559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.159583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.159720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.159745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.159908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.159937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.160082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.160110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.160253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.160279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 
00:33:42.316 [2024-07-26 02:09:24.160427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.160452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.160611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.160638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.160795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.160821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.160954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.160996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.161133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.161159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 
00:33:42.316 [2024-07-26 02:09:24.161293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.161319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.161461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.161505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.161618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.161645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.161809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.161834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.161983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.162026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 
00:33:42.316 [2024-07-26 02:09:24.162223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.162249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.162389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.162413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.162517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.162542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.162736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.162763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.162895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.162920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 
00:33:42.316 [2024-07-26 02:09:24.163034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.163066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.163174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.163200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.163337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.163362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.163503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.163528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.163637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.163662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 
00:33:42.316 [2024-07-26 02:09:24.164930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.164964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.165124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.165155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.165278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.165307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.165441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.165466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.165572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.165598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 
00:33:42.316 [2024-07-26 02:09:24.165731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.165760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.165920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.165945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.166085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.166112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.316 [2024-07-26 02:09:24.166248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.316 [2024-07-26 02:09:24.166274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.316 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.166409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.166436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 
00:33:42.317 [2024-07-26 02:09:24.166547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.166588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.166734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.166763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.166905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.166934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.167055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.167100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.167244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.167270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 
00:33:42.317 [2024-07-26 02:09:24.167399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.167425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.167582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.167612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.167778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.167804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.167938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.167964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.168079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.168106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 
00:33:42.317 [2024-07-26 02:09:24.168218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.168244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.168403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.168429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.168536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.168579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.168775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.168801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.168903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.168928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 
00:33:42.317 [2024-07-26 02:09:24.169071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.169097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.169287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.169315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.169456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.169481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.169616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.169642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.169790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.169819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 
00:33:42.317 [2024-07-26 02:09:24.169978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.170004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.170134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.170160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.170291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.170320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.170505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.170535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.170669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.170712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 
00:33:42.317 [2024-07-26 02:09:24.170861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.170889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.171013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.171038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.171217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.171276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.171449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.171478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.317 qpair failed and we were unable to recover it. 00:33:42.317 [2024-07-26 02:09:24.171586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.317 [2024-07-26 02:09:24.171613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.318 qpair failed and we were unable to recover it. 
00:33:42.318 [2024-07-26 02:09:24.171752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.171778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.171941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.171970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.172126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.172153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.172292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.172318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.172486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.172515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.172674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.172701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.172882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.172913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.173095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.173124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.173251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.173277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.173391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.173416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.173572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.173601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.173726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.173752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.173884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.173910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.174087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.174116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.174276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.174302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.174439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.174465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.174604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.174646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.174774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.174800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.174910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.174936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.175048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.175102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.175252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.175290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.175435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.175463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.175649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.175694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.175851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.175893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.176055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.176090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.176222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.176249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.176361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.176387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.176516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.176548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.176724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.176767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.176877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.176904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.177044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.177079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.177190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.177217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.177353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.177379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.177565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.177609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.177801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.177830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.318 [2024-07-26 02:09:24.177981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.318 [2024-07-26 02:09:24.178008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.318 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.178125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.178151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.178264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.178289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.178527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.178581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.178790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.178818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.178932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.178960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.179120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.179148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.179332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.179377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.179550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.179577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.179704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.179747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.179907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.179933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.180103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.180129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.180268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.180295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.180420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.180464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.180627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.180653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.180768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.180795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.180904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.180930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.181040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.181072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.181211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.181239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.181372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.181399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.181576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.181603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.181752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.181780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.181928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.181956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.182117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.182143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.182283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.182308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.182465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.182499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.182616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.182643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.182784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.182812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.182989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.183017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.183194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.183220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.183323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.183348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.183485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.183527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.183703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.183731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.183859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.183883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.184048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.184083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.184216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.184241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.184378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.319 [2024-07-26 02:09:24.184406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.319 qpair failed and we were unable to recover it.
00:33:42.319 [2024-07-26 02:09:24.184524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.184552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.184704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.184732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.184895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.184952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.185075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.185103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.185289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.185336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.185459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.185503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.185661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.185705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.185840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.185866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.185984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.186012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.186154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.186199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.186354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.186398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.186548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.186605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.186767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.186794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.186952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.186978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.187114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.187144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.187288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.187320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.187475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.187502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.187741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.187770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.187913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.187941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.188109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.188136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.188273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.188300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.188477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.188506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.188636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.188679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.188831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.188860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.188983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.189011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.189171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.189196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.189346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.189374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.189497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.189524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.189680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.189708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.189863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.189894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.190046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.190097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.190234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.190278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.190453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.190497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.190656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.190699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.190808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.190835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.190997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.191023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.191186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.320 [2024-07-26 02:09:24.191231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.320 qpair failed and we were unable to recover it.
00:33:42.320 [2024-07-26 02:09:24.191418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.321 [2024-07-26 02:09:24.191462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-07-26 02:09:24.191618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.321 [2024-07-26 02:09:24.191662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-07-26 02:09:24.191797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.321 [2024-07-26 02:09:24.191824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-07-26 02:09:24.191960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.321 [2024-07-26 02:09:24.191986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-07-26 02:09:24.192153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.321 [2024-07-26 02:09:24.192199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-07-26 02:09:24.192356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.321 [2024-07-26 02:09:24.192405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.321 qpair failed and we were unable to recover it.
00:33:42.321 [2024-07-26 02:09:24.192559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.192588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.192766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.192792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.192904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.192931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.193042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.193074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.193263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.193308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 
00:33:42.321 [2024-07-26 02:09:24.193471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.193516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.193629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.193655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.193765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.193792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.193924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.193950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.194064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.194092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 
00:33:42.321 [2024-07-26 02:09:24.194255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.194282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.194403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.194430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.194584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.194613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.194764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.194790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.194949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.194975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 
00:33:42.321 [2024-07-26 02:09:24.195159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.195204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.195341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.195367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.195511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.195538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.195698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.195724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.195829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.195857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 
00:33:42.321 [2024-07-26 02:09:24.195966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.195991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.196119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.196158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.196304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.196334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.196456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.196485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.196640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.196669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 
00:33:42.321 [2024-07-26 02:09:24.196801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.196828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.196972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.196998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.197137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.197162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.197402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.197468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 00:33:42.321 [2024-07-26 02:09:24.197595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.197623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.321 qpair failed and we were unable to recover it. 
00:33:42.321 [2024-07-26 02:09:24.197748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.321 [2024-07-26 02:09:24.197776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.197928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.197953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.198052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.198084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.198222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.198248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.198366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.198393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 
00:33:42.322 [2024-07-26 02:09:24.198515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.198557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.198708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.198735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.198855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.198879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.199051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.199092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.199225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.199250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 
00:33:42.322 [2024-07-26 02:09:24.199413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.199442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.199587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.199615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.199742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.199769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.199913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.199940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.200105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.200131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 
00:33:42.322 [2024-07-26 02:09:24.200266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.200291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.200505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.200566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.200717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.200744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.200890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.200917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.201073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.201100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 
00:33:42.322 [2024-07-26 02:09:24.201236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.201261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.201450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.201477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.201772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.201825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.202017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.202046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.202188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.202213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 
00:33:42.322 [2024-07-26 02:09:24.202317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.202342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.202478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.202503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.202659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.202686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.202857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.202886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.203031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.203067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 
00:33:42.322 [2024-07-26 02:09:24.203193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.203218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.203363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.203388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.203546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.203574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.322 qpair failed and we were unable to recover it. 00:33:42.322 [2024-07-26 02:09:24.203701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.322 [2024-07-26 02:09:24.203728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 00:33:42.323 [2024-07-26 02:09:24.203905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.323 [2024-07-26 02:09:24.203933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 
00:33:42.323 [2024-07-26 02:09:24.204057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.323 [2024-07-26 02:09:24.204114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 00:33:42.323 [2024-07-26 02:09:24.204249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.323 [2024-07-26 02:09:24.204274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 00:33:42.323 [2024-07-26 02:09:24.204416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.323 [2024-07-26 02:09:24.204442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 00:33:42.323 [2024-07-26 02:09:24.204595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.323 [2024-07-26 02:09:24.204623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 00:33:42.323 [2024-07-26 02:09:24.204770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.323 [2024-07-26 02:09:24.204799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 
00:33:42.323 [2024-07-26 02:09:24.204936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.323 [2024-07-26 02:09:24.204962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 00:33:42.323 [2024-07-26 02:09:24.205125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.323 [2024-07-26 02:09:24.205151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 00:33:42.323 [2024-07-26 02:09:24.205290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.323 [2024-07-26 02:09:24.205315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 00:33:42.323 [2024-07-26 02:09:24.205485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.323 [2024-07-26 02:09:24.205510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 00:33:42.323 [2024-07-26 02:09:24.205626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.323 [2024-07-26 02:09:24.205651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.323 qpair failed and we were unable to recover it. 
00:33:42.323 [2024-07-26 02:09:24.205822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.323 [2024-07-26 02:09:24.205865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.323 qpair failed and we were unable to recover it.
[the same three-line error sequence — connect() failed, errno = 111 (ECONNREFUSED), followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x1545f40 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats verbatim with only timestamps advancing, through 02:09:24.225443]
00:33:42.326 [2024-07-26 02:09:24.225575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.225601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.225760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.225788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.225952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.225978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.226088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.226114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.226215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.226241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 
00:33:42.326 [2024-07-26 02:09:24.226441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.226467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.226603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.226629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.226770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.226795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.226960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.226986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.227142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.227185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 
00:33:42.326 [2024-07-26 02:09:24.227334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.227363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.227512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.227538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.227676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.227725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.227895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.227923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.228052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.228112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 
00:33:42.326 [2024-07-26 02:09:24.228239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.228264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.228370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.228395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.228584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.228609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.228745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.228788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.228944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.228970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 
00:33:42.326 [2024-07-26 02:09:24.229084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.229126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.229306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.229334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.229560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.229620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.229748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.229776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.229918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.229944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 
00:33:42.326 [2024-07-26 02:09:24.230085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.230112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.230248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.230276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.230407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.230432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.230570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.230596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.230735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.230760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 
00:33:42.326 [2024-07-26 02:09:24.230891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.326 [2024-07-26 02:09:24.230916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.326 qpair failed and we were unable to recover it. 00:33:42.326 [2024-07-26 02:09:24.231070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.231096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.231211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.231237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.231362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.231391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.231529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.231558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 
00:33:42.327 [2024-07-26 02:09:24.231715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.231741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.231878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.231904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.232083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.232112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.232238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.232267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.232396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.232425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 
00:33:42.327 [2024-07-26 02:09:24.232559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.232585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.232716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.232742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.232932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.232959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.233141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.233166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.233271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.233296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 
00:33:42.327 [2024-07-26 02:09:24.233405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.233430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.233573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.233601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.233741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.233765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.233868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.233893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.234077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.234104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 
00:33:42.327 [2024-07-26 02:09:24.234243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.234269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.234467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.234492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.234599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.234624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.234791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.234820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.235000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.235027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 
00:33:42.327 [2024-07-26 02:09:24.235203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.235229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.235366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.235409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.235580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.235608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.235760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.235787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.235941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.235965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 
00:33:42.327 [2024-07-26 02:09:24.236107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.236134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.236277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.236301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.236451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.236475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.236634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.236660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.236768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.236810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 
00:33:42.327 [2024-07-26 02:09:24.236985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.237013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.237204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.237230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.237369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.237394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.237509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.237549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.237662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.237690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 
00:33:42.327 [2024-07-26 02:09:24.237839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.237866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.238016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.238041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.238185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.238211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.238370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.238398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 00:33:42.327 [2024-07-26 02:09:24.238587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.327 [2024-07-26 02:09:24.238615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.327 qpair failed and we were unable to recover it. 
00:33:42.327 [2024-07-26 02:09:24.238745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.327 [2024-07-26 02:09:24.238772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.327 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for every retry from 02:09:24.238922 through 02:09:24.258137 (log timestamps 00:33:42.327-00:33:42.615): each attempt fails with connect() errno = 111 in posix.c:1023:posix_sock_create, followed by the nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock error for tqpair=0x1545f40 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:33:42.615 [2024-07-26 02:09:24.258249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.258275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.258409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.258434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.258570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.258596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.258747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.258776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.258911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.258939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 
00:33:42.615 [2024-07-26 02:09:24.259113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.259139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.259277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.259303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.259471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.259511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.259656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.259683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.259887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.259915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 
00:33:42.615 [2024-07-26 02:09:24.260039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.260076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.260233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.260258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.260415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.260448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.260599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.260628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.260778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.260805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 
00:33:42.615 [2024-07-26 02:09:24.260952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.260979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.261114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.261140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.261279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.261303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.261435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.261461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.261573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.261597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 
00:33:42.615 [2024-07-26 02:09:24.261804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.261831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.261984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.262011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.262144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.262170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.262274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.262299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.262453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.262481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 
00:33:42.615 [2024-07-26 02:09:24.262632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.262660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.262852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.262879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.262989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.263017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.263147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.263173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.263287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.263312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 
00:33:42.615 [2024-07-26 02:09:24.263440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.263470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.263652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.263681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.615 [2024-07-26 02:09:24.263825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.615 [2024-07-26 02:09:24.263853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.615 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.264030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.264066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.264223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.264249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 
00:33:42.616 [2024-07-26 02:09:24.264374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.264402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.264578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.264605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.264745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.264772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.264905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.264933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.265089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.265120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 
00:33:42.616 [2024-07-26 02:09:24.265273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.265298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.265452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.265480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.265628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.265656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.265809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.265837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.265992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.266018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 
00:33:42.616 [2024-07-26 02:09:24.266134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.266159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.266264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.266290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.266435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.266463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.266704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.266731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.266893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.266920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 
00:33:42.616 [2024-07-26 02:09:24.267075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.267118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.267234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.267260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.267395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.267419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.267572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.267601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.267752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.267781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 
00:33:42.616 [2024-07-26 02:09:24.267924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.267951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.268076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.268121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.268237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.268262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.268396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.268420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.268577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.268606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 
00:33:42.616 [2024-07-26 02:09:24.268778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.268807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.268963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.268990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.269136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.269163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.269278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.269303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.269505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.269532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 
00:33:42.616 [2024-07-26 02:09:24.269682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.269711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.616 qpair failed and we were unable to recover it. 00:33:42.616 [2024-07-26 02:09:24.269856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.616 [2024-07-26 02:09:24.269889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.617 qpair failed and we were unable to recover it. 00:33:42.617 [2024-07-26 02:09:24.270020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.617 [2024-07-26 02:09:24.270045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.617 qpair failed and we were unable to recover it. 00:33:42.617 [2024-07-26 02:09:24.270173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.617 [2024-07-26 02:09:24.270198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.617 qpair failed and we were unable to recover it. 00:33:42.617 [2024-07-26 02:09:24.270342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.617 [2024-07-26 02:09:24.270382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.617 qpair failed and we were unable to recover it. 
00:33:42.617 [2024-07-26 02:09:24.270536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.617 [2024-07-26 02:09:24.270564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.617 qpair failed and we were unable to recover it. 00:33:42.617 [2024-07-26 02:09:24.270727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.617 [2024-07-26 02:09:24.270769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.617 qpair failed and we were unable to recover it. 00:33:42.617 [2024-07-26 02:09:24.270912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.617 [2024-07-26 02:09:24.270942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.617 qpair failed and we were unable to recover it. 00:33:42.617 [2024-07-26 02:09:24.271073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.617 [2024-07-26 02:09:24.271098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.617 qpair failed and we were unable to recover it. 00:33:42.617 [2024-07-26 02:09:24.271202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.617 [2024-07-26 02:09:24.271226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.617 qpair failed and we were unable to recover it. 
00:33:42.617 [2024-07-26 02:09:24.271365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.271389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.271531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.271572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.271729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.271757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.271879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.271907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.272047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.272078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.272189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.272214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.272400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.272429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.272589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.272614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.272743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.272771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.272927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.272953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.273121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.273147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.273254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.273278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.273443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.273472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.273626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.273655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.273790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.273830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.273972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.274001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.274139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.274165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.274270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.274296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.274432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.274474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.274595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.274623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.274800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.274829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-26 02:09:24.274969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-26 02:09:24.275010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.275180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.275207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.275320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.275345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.275479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.275506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.275637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.275680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.275795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.275837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.275984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.276012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.276146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.276172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.276324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.276350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.276486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.276513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.276666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.276694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.276851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.276879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.277026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.277052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.277197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.277222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.277406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.277434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.277581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.277609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.277726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.277754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.277904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.277932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.278088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.278113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.278228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.278253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.278365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.278390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.278530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.278572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.278747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.278776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.278926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.278953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.279090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.279115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.279259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.279285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.279486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.279512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.279663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.279690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.279915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.279943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.280119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.280144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.280256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.280282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.280414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.280438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.280623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.280651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.280815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.280843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-26 02:09:24.280977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-26 02:09:24.281002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.281163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.281189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.281293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.281318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.281431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.281471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.281590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.281623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.281748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.281776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.282011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.282039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.282204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.282229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.282345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.282371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.282530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.282558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.282705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.282733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.282907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.282935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.283104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.283130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.283269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.283294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.283402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.283429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.283580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.283621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.283729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.283756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.283915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.283943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.284103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.284129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.284266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.284291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.284492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.284521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.284695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.284723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.284844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.284885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.285047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.285080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.285198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.285224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.285383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.285409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.285563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.285588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.285769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.285798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.285947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.285975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.286093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.286134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.286247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.286272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.286386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-26 02:09:24.286415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-26 02:09:24.286543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.286571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.286746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.286775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.286902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.286927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.287055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.287099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.287236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.287262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.287388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.287415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.287536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.287577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.287702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.287730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.287875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.287903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.288020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.288048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.288188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.288213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.288346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.288387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.288498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.288525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.288705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.288733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.288893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.288919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.289081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.289110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.289243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.289284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.289447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.289473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.289645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.289671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.289782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.289807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.289928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.289955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.290138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.290164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.290304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.290329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.290465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.290491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.290630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.290655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.290791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.290816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.290961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.290987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.291130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-26 02:09:24.291156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-26 02:09:24.291338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-26 02:09:24.291366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-26 02:09:24.291515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-26 02:09:24.291543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.291683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.291708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.291864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.291891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.292049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.292082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 
00:33:42.621 [2024-07-26 02:09:24.292195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.292220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.292329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.292354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.292492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.292533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.292687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.292714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.292864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.292892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 
00:33:42.621 [2024-07-26 02:09:24.293047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.293079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.293245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.293269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.293407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.293435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.293578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.293605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.293808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.293833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 
00:33:42.621 [2024-07-26 02:09:24.293985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.294011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.294199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.294225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.294402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.294430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.294577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.294602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.294739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.294778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 
00:33:42.621 [2024-07-26 02:09:24.294935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.294963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.295115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.295144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.295327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.295352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.295531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.295560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.295717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.295742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 
00:33:42.621 [2024-07-26 02:09:24.295877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.295901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.296021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.296045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.296190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.296231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.296395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.296420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.296548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.296573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 
00:33:42.621 [2024-07-26 02:09:24.296773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.296799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.296943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.296968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.297103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.297129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.297322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.297351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-26 02:09:24.297485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-26 02:09:24.297510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 
00:33:42.621 [2024-07-26 02:09:24.297649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.297690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.297861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.297890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.298018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.298047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.298239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.298264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.298426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.298461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 
00:33:42.622 [2024-07-26 02:09:24.298623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.298649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.298753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.298779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.298885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.298911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.299046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.299079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.299185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.299211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 
00:33:42.622 [2024-07-26 02:09:24.299337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.299362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.299503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.299528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.299679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.299707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.299868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.299893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.300036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.300069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 
00:33:42.622 [2024-07-26 02:09:24.300203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.300229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.300353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.300378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.300512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.300538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.300690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.300715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.300820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.300846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 
00:33:42.622 [2024-07-26 02:09:24.300980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.301006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.301132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.301158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.301270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.301296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.301443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.301468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.301568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.301594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 
00:33:42.622 [2024-07-26 02:09:24.301696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.301722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.301875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.301903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.302049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.302107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.302256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.302281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.302412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.302438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 
00:33:42.622 [2024-07-26 02:09:24.302571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.302596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.302773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.302803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.302905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.302931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.303079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-26 02:09:24.303105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-26 02:09:24.303230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.303256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 
00:33:42.623 [2024-07-26 02:09:24.303390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.303415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.303529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.303554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.303762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.303788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.303934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.303960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.304070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.304096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 
00:33:42.623 [2024-07-26 02:09:24.304234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.304259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.304364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.304390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.304497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.304523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.304653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.304679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.304787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.304813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 
00:33:42.623 [2024-07-26 02:09:24.304975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.305004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.305195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.305222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.305357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.305383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.305522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.305548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.305716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.305760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 
00:33:42.623 [2024-07-26 02:09:24.305910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.305938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.306122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.306148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.306260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.306300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.306483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.306512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.306684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.306712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 
00:33:42.623 [2024-07-26 02:09:24.306841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.306884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.307071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.307101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.307256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.307281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.307460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.307492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.307646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.307672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 
00:33:42.623 [2024-07-26 02:09:24.307779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.307803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.307962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.307986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.308183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.308209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.308321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.308345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.308508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.308533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 
00:33:42.623 [2024-07-26 02:09:24.308712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.308739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.308862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.308889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-26 02:09:24.309044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-26 02:09:24.309076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.309192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.309217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.309387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.309411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 
00:33:42.624 [2024-07-26 02:09:24.309571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.309598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.309756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.309781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.309898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.309923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.310094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.310119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.310227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.310253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 
00:33:42.624 [2024-07-26 02:09:24.310354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.310378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.310523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.310564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.310675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.310702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.310822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.310849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.311005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.311032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 
00:33:42.624 [2024-07-26 02:09:24.311150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.311175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.311316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.311342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.311545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.311570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.311733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.311758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.311898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.311924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 
00:33:42.624 [2024-07-26 02:09:24.312067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.312093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.312237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.312261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.312435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.312459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.312596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.312622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.312784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.312811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 
00:33:42.624 [2024-07-26 02:09:24.312966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.313007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.313146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.313175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.313314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.313339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.313493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.313522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.313630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.313659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 
00:33:42.624 [2024-07-26 02:09:24.313819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.313844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.313943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.313968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.314126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.314155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.314303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.314330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.314461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.314485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 
00:33:42.624 [2024-07-26 02:09:24.314592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.624 [2024-07-26 02:09:24.314618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.624 qpair failed and we were unable to recover it. 00:33:42.624 [2024-07-26 02:09:24.314796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.314823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.314954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.314981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.315158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.315184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.315291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.315332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 
00:33:42.625 [2024-07-26 02:09:24.315483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.315511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.315660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.315688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.315838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.315863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.315975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.316001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.316180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.316206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 
00:33:42.625 [2024-07-26 02:09:24.316318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.316343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.316458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.316483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.316598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.316623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.316803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.316831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.316945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.316972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 
00:33:42.625 [2024-07-26 02:09:24.317113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.317140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.317304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.317329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.317457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.317500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.317606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.317630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.317771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.317797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 
00:33:42.625 [2024-07-26 02:09:24.317954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.317983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.318137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.318162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.318268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.318292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.318428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.318452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.318603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.318630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 
00:33:42.625 [2024-07-26 02:09:24.318776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.318803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.318942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.318975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.625 [2024-07-26 02:09:24.319160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.625 [2024-07-26 02:09:24.319185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.625 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.319337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.319366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.319486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.319515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 
00:33:42.626 [2024-07-26 02:09:24.319669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.319696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.319835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.319860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.319994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.320020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.320159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.320184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.320293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.320317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 
00:33:42.626 [2024-07-26 02:09:24.320452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.320476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.320580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.320605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.320793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.320821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.320937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.320964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.321139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.321164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 
00:33:42.626 [2024-07-26 02:09:24.321310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.321336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.321495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.321523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.321644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.321671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.321849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.321875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 00:33:42.626 [2024-07-26 02:09:24.322021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.626 [2024-07-26 02:09:24.322050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.626 qpair failed and we were unable to recover it. 
00:33:42.626 [2024-07-26 02:09:24.322213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.322241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.322441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.322468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.322623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.322648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.322793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.322818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.322976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.323003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.323158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.323183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.323320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.323345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.323485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.323527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.323678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.323710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.323871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.323897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.324087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.324113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.324277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.324302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.324475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.324499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.324659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.324685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.324791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.324816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.324921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.626 [2024-07-26 02:09:24.324946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.626 qpair failed and we were unable to recover it.
00:33:42.626 [2024-07-26 02:09:24.325098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.325126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.325277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.325304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.325482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.325507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.325686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.325715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.325836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.325865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.326013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.326041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.326244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.326270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.326385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.326424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.326543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.326570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.326717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.326744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.326874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.326899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.327038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.327072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.327208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.327233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.327413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.327439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.327568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.327592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.327757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.327799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.327968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.327993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.328109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.328134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.328268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.328294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.328442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.328467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.328583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.328608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.328738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.328764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.328892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.328921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.329135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.329160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.329260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.329286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.329395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.329419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.329565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.329593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.329743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.329770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.329899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.329924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.330035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.330066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.330180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.330205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.330340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.330365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.330527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.330552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.627 qpair failed and we were unable to recover it.
00:33:42.627 [2024-07-26 02:09:24.330707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.627 [2024-07-26 02:09:24.330732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.330931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.330960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.331104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.331140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.331278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.331303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.331461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.331488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.331621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.331646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.331776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.331801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.331962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.331990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.332123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.332151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.332284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.332309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.332477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.332502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.332637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.332662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.332808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.332834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.332965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.332991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.333183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.333213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.333367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.333392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.333554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.333580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.333687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.333713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.333847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.333872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.334043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.334086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.334192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.334218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.334329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.334354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.334492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.334518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.334645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.334672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.334794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.334836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.334992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.335021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.335214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.335240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.335378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.335406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.335544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.335570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.335703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.335728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.335842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.335868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.335979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.336004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.336113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.336138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.336270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.628 [2024-07-26 02:09:24.336294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.628 qpair failed and we were unable to recover it.
00:33:42.628 [2024-07-26 02:09:24.336473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.336500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.336676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.336702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.336837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.336878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.337011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.337036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.337175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.337215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.337376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.337402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.337535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.337560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.337699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.337724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.337834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.337859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.338040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.338071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.338209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.338234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.338368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.629 [2024-07-26 02:09:24.338394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.629 qpair failed and we were unable to recover it.
00:33:42.629 [2024-07-26 02:09:24.338529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.338571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.338735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.338760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.338923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.338965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.339121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.339146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.339260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.339285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 
00:33:42.629 [2024-07-26 02:09:24.339422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.339447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.339552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.339576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.339739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.339765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.339895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.339928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.340055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.340097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 
00:33:42.629 [2024-07-26 02:09:24.340268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.340295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.340447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.340472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.340606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.340631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.340772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.340812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.340945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.340970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 
00:33:42.629 [2024-07-26 02:09:24.341110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.341135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.341247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.341288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.341467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.341496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-26 02:09:24.341636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-26 02:09:24.341663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.341794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.341820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 
00:33:42.630 [2024-07-26 02:09:24.341933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.341958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.342110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.342137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.342279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.342306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.342458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.342483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.342620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.342645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 
00:33:42.630 [2024-07-26 02:09:24.342781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.342807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.342972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.342997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.343131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.343157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.343327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.343353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.343504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.343531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 
00:33:42.630 [2024-07-26 02:09:24.343680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.343705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.343814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.343856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.344047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.344084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.344241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.344266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.344410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.344435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 
00:33:42.630 [2024-07-26 02:09:24.344604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.344633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.344786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.344813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.344995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.345022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.345179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.345208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.345385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.345411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 
00:33:42.630 [2024-07-26 02:09:24.345592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.345619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.345772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.345815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.345940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.345969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.346167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.346193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-26 02:09:24.346348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.346375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 
00:33:42.630 [2024-07-26 02:09:24.346497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-26 02:09:24.346537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.346650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.346675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.346809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.346834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.346946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.346972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.347167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.347197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 
00:33:42.631 [2024-07-26 02:09:24.347349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.347377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.347535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.347561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.347713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.347741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.347916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.347941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.348052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.348083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 
00:33:42.631 [2024-07-26 02:09:24.348247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.348272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.348454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.348483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.348641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.348667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.348810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.348836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.348986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.349015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 
00:33:42.631 [2024-07-26 02:09:24.349186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.349211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.349319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.349344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.349454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.349480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.349618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.349643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.349781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.349806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 
00:33:42.631 [2024-07-26 02:09:24.349939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.349964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.350140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.350167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.350324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.350349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.350529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.350558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.350702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.350730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 
00:33:42.631 [2024-07-26 02:09:24.350856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.350885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.351159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.351185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.351300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.351342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.351530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.351555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.351691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.351715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 
00:33:42.631 [2024-07-26 02:09:24.351848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.351890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.352043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.352075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.352233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.352258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.352398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.352425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-26 02:09:24.352538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-26 02:09:24.352562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 
00:33:42.632 [2024-07-26 02:09:24.352695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.632 [2024-07-26 02:09:24.352733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.632 qpair failed and we were unable to recover it. 00:33:42.632 [2024-07-26 02:09:24.352882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.632 [2024-07-26 02:09:24.352911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.632 qpair failed and we were unable to recover it. 00:33:42.632 [2024-07-26 02:09:24.353035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.632 [2024-07-26 02:09:24.353068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.632 qpair failed and we were unable to recover it. 00:33:42.632 [2024-07-26 02:09:24.353225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.632 [2024-07-26 02:09:24.353249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.632 qpair failed and we were unable to recover it. 00:33:42.632 [2024-07-26 02:09:24.353359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.632 [2024-07-26 02:09:24.353402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.632 qpair failed and we were unable to recover it. 
00:33:42.632 [2024-07-26 02:09:24.353551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.632 [2024-07-26 02:09:24.353578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.632 qpair failed and we were unable to recover it.
[... the same three-line record (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 02:09:24.353551 through 02:09:24.373012 ...]
00:33:42.635 [2024-07-26 02:09:24.372985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.635 [2024-07-26 02:09:24.373012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.635 qpair failed and we were unable to recover it.
00:33:42.635 [2024-07-26 02:09:24.373180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.635 [2024-07-26 02:09:24.373205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.635 qpair failed and we were unable to recover it. 00:33:42.635 [2024-07-26 02:09:24.373362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.635 [2024-07-26 02:09:24.373391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.635 qpair failed and we were unable to recover it. 00:33:42.635 [2024-07-26 02:09:24.373547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.635 [2024-07-26 02:09:24.373574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.635 qpair failed and we were unable to recover it. 00:33:42.635 [2024-07-26 02:09:24.373733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.635 [2024-07-26 02:09:24.373758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.635 qpair failed and we were unable to recover it. 00:33:42.635 [2024-07-26 02:09:24.373904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.635 [2024-07-26 02:09:24.373929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.635 qpair failed and we were unable to recover it. 
00:33:42.635 [2024-07-26 02:09:24.374110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.635 [2024-07-26 02:09:24.374138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.635 qpair failed and we were unable to recover it. 00:33:42.635 [2024-07-26 02:09:24.374312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.635 [2024-07-26 02:09:24.374339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.635 qpair failed and we were unable to recover it. 00:33:42.635 [2024-07-26 02:09:24.374494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.635 [2024-07-26 02:09:24.374519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.635 qpair failed and we were unable to recover it. 00:33:42.635 [2024-07-26 02:09:24.374662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.635 [2024-07-26 02:09:24.374687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.635 qpair failed and we were unable to recover it. 00:33:42.635 [2024-07-26 02:09:24.374828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.635 [2024-07-26 02:09:24.374853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.635 qpair failed and we were unable to recover it. 
00:33:42.635 [2024-07-26 02:09:24.374976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.375004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.375140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.375166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.375310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.375334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.375495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.375523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.375665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.375694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 
00:33:42.636 [2024-07-26 02:09:24.375820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.375845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.375981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.376006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.376113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.376138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.376271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.376298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.376455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.376480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 
00:33:42.636 [2024-07-26 02:09:24.376615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.376655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.376807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.376832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.376965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.376991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.377184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.377210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.377330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.377370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 
00:33:42.636 [2024-07-26 02:09:24.377498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.377531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.377679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.377707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.377859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.377884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.378026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.378051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.378247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.378275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 
00:33:42.636 [2024-07-26 02:09:24.378439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.378463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.378596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.378620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.378757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.378782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.378890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.378914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.379056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.379088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 
00:33:42.636 [2024-07-26 02:09:24.379224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.379249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.379382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.379406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.379511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.379537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.379688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.379715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.379933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.379961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 
00:33:42.636 [2024-07-26 02:09:24.380127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.380153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.636 qpair failed and we were unable to recover it. 00:33:42.636 [2024-07-26 02:09:24.380294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.636 [2024-07-26 02:09:24.380320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.380438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.380462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.380597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.380623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.380763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.380787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 
00:33:42.637 [2024-07-26 02:09:24.380967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.380991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.381100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.381126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.381235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.381260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.381399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.381424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.381561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.381586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 
00:33:42.637 [2024-07-26 02:09:24.381718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.381742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.381913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.381938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.382101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.382129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.382235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.382259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.382399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.382426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 
00:33:42.637 [2024-07-26 02:09:24.382610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.382636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.382793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.382821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.382969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.382997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.383121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.383148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.383312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.383338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 
00:33:42.637 [2024-07-26 02:09:24.383472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.383514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.383667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.383694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.383838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.383866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.384026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.384051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.384238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.384266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 
00:33:42.637 [2024-07-26 02:09:24.384420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.384445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.384559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.384585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.384686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.384711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.384877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.384902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.385037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.385072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 
00:33:42.637 [2024-07-26 02:09:24.385236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.385260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.385394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.385420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.385561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.385602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.385761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.385785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 00:33:42.637 [2024-07-26 02:09:24.385926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.637 [2024-07-26 02:09:24.385968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.637 qpair failed and we were unable to recover it. 
00:33:42.637 [2024-07-26 02:09:24.386117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.638 [2024-07-26 02:09:24.386144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.638 qpair failed and we were unable to recover it.
00:33:42.641 [... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats ~110 more times, timestamps 02:09:24.386284 through 02:09:24.405637 ...]
00:33:42.641 [2024-07-26 02:09:24.405788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.405816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.405974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.405999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.406142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.406167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.406314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.406340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.406494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.406523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 
00:33:42.641 [2024-07-26 02:09:24.406688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.406713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.406843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.406869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.406979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.407004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.407169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.407195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.407333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.407359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 
00:33:42.641 [2024-07-26 02:09:24.407506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.407531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.407689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.407716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.407875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.407903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.408069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.408094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 00:33:42.641 [2024-07-26 02:09:24.408249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.641 [2024-07-26 02:09:24.408276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.641 qpair failed and we were unable to recover it. 
00:33:42.641 [2024-07-26 02:09:24.408453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.408480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.408629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.408658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.408785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.408809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.408947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.408971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.409155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.409181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 
00:33:42.642 [2024-07-26 02:09:24.409317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.409341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.409472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.409497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.409634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.409676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.409801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.409828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.409943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.409971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 
00:33:42.642 [2024-07-26 02:09:24.410125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.410154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.410260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.410286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.410400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.410424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.410526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.410550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.410689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.410713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 
00:33:42.642 [2024-07-26 02:09:24.410851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.410875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.411008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.411033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.411179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.411204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.411341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.411366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.411522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.411549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 
00:33:42.642 [2024-07-26 02:09:24.411728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.411755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.411901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.411929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.412107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.412133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.412290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.412318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.412436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.412463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 
00:33:42.642 [2024-07-26 02:09:24.412617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.412646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.412798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.412824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.413004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.413031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.413194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.413219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.413326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.413351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 
00:33:42.642 [2024-07-26 02:09:24.413513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.413538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.413669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.413695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.413828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.413853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.642 [2024-07-26 02:09:24.414002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.642 [2024-07-26 02:09:24.414029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.642 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.414164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.414189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 
00:33:42.643 [2024-07-26 02:09:24.414294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.414319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.414476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.414503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.414624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.414669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.414817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.414842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.414983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.415007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 
00:33:42.643 [2024-07-26 02:09:24.415123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.415148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.415321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.415363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.415493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.415517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.415684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.415729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.415844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.415873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 
00:33:42.643 [2024-07-26 02:09:24.416015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.416043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.416199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.416225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.416406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.416434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.416606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.416634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.416786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.416814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 
00:33:42.643 [2024-07-26 02:09:24.416950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.416977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.417132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.417159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.417291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.417316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.417468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.417497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.417635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.417660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 
00:33:42.643 [2024-07-26 02:09:24.417796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.417821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.418003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.418032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.418224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.418253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.418415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.418441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.418584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.418608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 
00:33:42.643 [2024-07-26 02:09:24.418798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.418823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.418953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.418995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.419182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.419207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.419344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.419384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 00:33:42.643 [2024-07-26 02:09:24.419533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.643 [2024-07-26 02:09:24.419566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.643 qpair failed and we were unable to recover it. 
00:33:42.647 [2024-07-26 02:09:24.438839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.438867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.438992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.439020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.439179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.439207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.439361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.439386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.439502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.439527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 
00:33:42.647 [2024-07-26 02:09:24.439638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.439663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.439795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.439821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.439948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.439976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.440161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.440187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.440302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.440328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 
00:33:42.647 [2024-07-26 02:09:24.440491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.440520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.440662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.440688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.440802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.440827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.440987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.441016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.441192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.441221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 
00:33:42.647 [2024-07-26 02:09:24.441383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.441409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.441520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.441545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.441675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.441703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.441882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.441907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.442021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.442046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 
00:33:42.647 [2024-07-26 02:09:24.442177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.442203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.647 qpair failed and we were unable to recover it. 00:33:42.647 [2024-07-26 02:09:24.442363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.647 [2024-07-26 02:09:24.442389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.442525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.442552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.442722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.442748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.442924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.442952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 
00:33:42.648 [2024-07-26 02:09:24.443136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.443165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.443286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.443314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.443465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.443490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.443652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.443695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.443849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.443877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 
00:33:42.648 [2024-07-26 02:09:24.443998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.444027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.444174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.444199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.444346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.444371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.444528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.444555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.444725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.444752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 
00:33:42.648 [2024-07-26 02:09:24.444929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.444953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.445084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.445113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.445236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.445264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.445377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.445404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.445565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.445590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 
00:33:42.648 [2024-07-26 02:09:24.445777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.445804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.445947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.445974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.446098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.446141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.446305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.446330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.446496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.446523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 
00:33:42.648 [2024-07-26 02:09:24.446661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.446689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.446836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.446863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.447013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.447038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.447190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.447232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.447380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.447408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 
00:33:42.648 [2024-07-26 02:09:24.447528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.447556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.447716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.447742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.447878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.447908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.448048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.648 [2024-07-26 02:09:24.448078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.648 qpair failed and we were unable to recover it. 00:33:42.648 [2024-07-26 02:09:24.448217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.448246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 
00:33:42.649 [2024-07-26 02:09:24.448405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.448430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.448612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.448640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.448811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.448839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.448999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.449025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.449203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.449229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 
00:33:42.649 [2024-07-26 02:09:24.449361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.449388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.449550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.449575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.449734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.449759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.449915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.449942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.450134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.450160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 
00:33:42.649 [2024-07-26 02:09:24.450269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.450295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.450453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.450481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.450615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.450640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.450781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.450806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.450971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.450997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 
00:33:42.649 [2024-07-26 02:09:24.451131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.451157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.451302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.451328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.451525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.451553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.451707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.451735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 00:33:42.649 [2024-07-26 02:09:24.451874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.649 [2024-07-26 02:09:24.451902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.649 qpair failed and we were unable to recover it. 
00:33:42.649 [2024-07-26 02:09:24.452068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.649 [2024-07-26 02:09:24.452094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.649 qpair failed and we were unable to recover it.
00:33:42.649 [... the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeated for every retry from 02:09:24.452209 through 02:09:24.471677 ...]
00:33:42.653 [2024-07-26 02:09:24.471786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.653 [2024-07-26 02:09:24.471811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.653 qpair failed and we were unable to recover it.
00:33:42.653 [2024-07-26 02:09:24.471996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.472024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.472181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.472205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.472355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.472383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.472532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.472560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.472732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.472759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 
00:33:42.653 [2024-07-26 02:09:24.472914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.472941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.473092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.473139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.473304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.473329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.473497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.473524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.473653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.473678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 
00:33:42.653 [2024-07-26 02:09:24.473810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.473835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.473995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.474029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.474216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.474244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.474408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.474434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.474599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.474625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 
00:33:42.653 [2024-07-26 02:09:24.474772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.474799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.474919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.474949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.475107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.475133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.475251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.475276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.475437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.475462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 
00:33:42.653 [2024-07-26 02:09:24.475633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.475659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.475784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.475808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.475973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.475998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.476135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.476160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.476337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.476361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 
00:33:42.653 [2024-07-26 02:09:24.476527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.476552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.653 qpair failed and we were unable to recover it. 00:33:42.653 [2024-07-26 02:09:24.476654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.653 [2024-07-26 02:09:24.476694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.476819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.476846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.477016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.477045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.477215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.477241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 
00:33:42.654 [2024-07-26 02:09:24.477428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.477457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.477604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.477632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.477746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.477774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.477925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.477950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.478087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.478128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 
00:33:42.654 [2024-07-26 02:09:24.478306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.478335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.478488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.478517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.478650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.478675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.478810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.478835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.479020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.479046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 
00:33:42.654 [2024-07-26 02:09:24.479213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.479255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.479416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.479440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.479580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.479606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.479749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.479773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.479915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.479940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 
00:33:42.654 [2024-07-26 02:09:24.480109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.480134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.480299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.480340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.480517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.480543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.480722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.480749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.480924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.480952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 
00:33:42.654 [2024-07-26 02:09:24.481133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.481162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.481316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.481344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.481483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.481524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.481679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.481704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.481809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.481834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 
00:33:42.654 [2024-07-26 02:09:24.482000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.482026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.482202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.482245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.482405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.482431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.482571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.482615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 00:33:42.654 [2024-07-26 02:09:24.482737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.654 [2024-07-26 02:09:24.482764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.654 qpair failed and we were unable to recover it. 
00:33:42.654 [2024-07-26 02:09:24.482932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.482958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.483093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.483119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.483228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.483253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.483419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.483447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.483597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.483624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 
00:33:42.655 [2024-07-26 02:09:24.483758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.483783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.483923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.483948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.484098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.484126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.484242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.484269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.484446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.484472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 
00:33:42.655 [2024-07-26 02:09:24.484586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.484611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.484743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.484769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.484923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.484951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.485081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.485107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 00:33:42.655 [2024-07-26 02:09:24.485215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.655 [2024-07-26 02:09:24.485242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.655 qpair failed and we were unable to recover it. 
00:33:42.655 [2024-07-26 02:09:24.485399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.485427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.485556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.485582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.485720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.485746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.485885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.485927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.486090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.486118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.486282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.486324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.486508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.486533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.486645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.486688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.486839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.486866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.486984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.487012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.487202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.487228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.487369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.487410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.487529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.487558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.487710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.487738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.487880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.487905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.488070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.488113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.488235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.488264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.488438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.488466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.488621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.655 [2024-07-26 02:09:24.488646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.655 qpair failed and we were unable to recover it.
00:33:42.655 [2024-07-26 02:09:24.488785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.488811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.488928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.488953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.489069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.489094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.489225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.489251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.489419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.489446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.489568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.489595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.489756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.489782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.489913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.489940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.490082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.490125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.490228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.490253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.490354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.490378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.490487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.490512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.490651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.490680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.490822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.490847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.491006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.491034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.491195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.491222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.491357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.491400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.491513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.491540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.491690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.491719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.491850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.491875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.492019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.492044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.492192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.492217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.492379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.492407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.492585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.492610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.492720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.492761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.492904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.492933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.493089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.493117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.493272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.493297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.493438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.493480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.493644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.656 [2024-07-26 02:09:24.493670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.656 qpair failed and we were unable to recover it.
00:33:42.656 [2024-07-26 02:09:24.493806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.493831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.493968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.493993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.494106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.494148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.494325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.494353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.494500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.494528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.494663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.494689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.494803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.494828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.494983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.495009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.495142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.495167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.495277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.495305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.495446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.495488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.495605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.495634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.495754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.495781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.495941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.495966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.496081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.496106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.496263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.496290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.496416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.496444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.496627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.496652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.496807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.496834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.496962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.496991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.497132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.497159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.497321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.497347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.497531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.497560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.497713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.497741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.497861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.497890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.498054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.498087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.498197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.498238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.498355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.498383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.498496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.498523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.498680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.498705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.498810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.498834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.498998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.499026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.499163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.657 [2024-07-26 02:09:24.499191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.657 qpair failed and we were unable to recover it.
00:33:42.657 [2024-07-26 02:09:24.499329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.499354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.499467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.499493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.499631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.499656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.499793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.499818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.499961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.499986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.500100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.500125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.500265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.500290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.500419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.500446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.500626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.500651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.500814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.500842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.500984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.501010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.501171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.501196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.501403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.501428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.501544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.501569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.501732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.501756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.501911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.501938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.502101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.502127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.502269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.502298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.502435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.502460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.502595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.502624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.502781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.502806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.502943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.502984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.503109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.503137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.503281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.503309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.503472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.503497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.503658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.503683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.503821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.503847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.503982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.504010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.504180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.504205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.504323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.504349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.504482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.504506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.504639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.504668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.504808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.658 [2024-07-26 02:09:24.504848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.658 qpair failed and we were unable to recover it.
00:33:42.658 [2024-07-26 02:09:24.505002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.658 [2024-07-26 02:09:24.505029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.658 qpair failed and we were unable to recover it. 00:33:42.658 [2024-07-26 02:09:24.505169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.658 [2024-07-26 02:09:24.505195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.658 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.505331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.505356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.505490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.505515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.505645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.505670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 
00:33:42.659 [2024-07-26 02:09:24.505808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.505832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.505971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.506000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.506134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.506159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.506275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.506299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.506462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.506488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 
00:33:42.659 [2024-07-26 02:09:24.506654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.506682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.506863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.506891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.507038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.507084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.507271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.507300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.507453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.507477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 
00:33:42.659 [2024-07-26 02:09:24.507635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.507659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.507816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.507845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.507985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.508013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.508139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.508167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.508328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.508354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 
00:33:42.659 [2024-07-26 02:09:24.508471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.508496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.508625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.508650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.508760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.508786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.508949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.508973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.509135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.509164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 
00:33:42.659 [2024-07-26 02:09:24.509349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.509378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.509507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.509535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.509672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.509697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.509839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.509865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.509993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.510021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 
00:33:42.659 [2024-07-26 02:09:24.510190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.510220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.510345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.510370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.510509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.510534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.510692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.510718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.510911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.510939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 
00:33:42.659 [2024-07-26 02:09:24.511072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.511098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.511212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.511237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.511428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.511453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.659 [2024-07-26 02:09:24.511591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.659 [2024-07-26 02:09:24.511621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.659 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.511774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.511800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 
00:33:42.660 [2024-07-26 02:09:24.511963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.511988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.512092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.512118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.512295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.512323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.512451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.512477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.512640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.512665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 
00:33:42.660 [2024-07-26 02:09:24.512800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.512826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.513014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.513043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.513179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.513205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.513327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.513353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.513493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.513534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 
00:33:42.660 [2024-07-26 02:09:24.513639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.513664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.513797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.513822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.513944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.513986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.514139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.514168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.514344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.514372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 
00:33:42.660 [2024-07-26 02:09:24.514522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.514548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.514689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.514714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.514814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.514840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.514943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.514968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.515104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.515130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 
00:33:42.660 [2024-07-26 02:09:24.515247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.515272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.515380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.515405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.515541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.515567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.515676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.515702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.515852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.515880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 
00:33:42.660 [2024-07-26 02:09:24.515998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.516027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.516188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.516214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.516374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.516400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.516508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.516534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.516721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.516749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 
00:33:42.660 [2024-07-26 02:09:24.516898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.516926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.517087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.517114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.517220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.517245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.517378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.660 [2024-07-26 02:09:24.517420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.660 qpair failed and we were unable to recover it. 00:33:42.660 [2024-07-26 02:09:24.517584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.661 [2024-07-26 02:09:24.517609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.661 qpair failed and we were unable to recover it. 
00:33:42.661 [2024-07-26 02:09:24.517778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.517803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.517944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.517970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.518103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.518129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.518294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.518322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.518511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.518537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.518697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.518726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.518895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.518920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.519064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.519106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.519238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.519265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.519432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.519473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.519624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.519653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.519799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.519827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.519984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.520009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.520157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.520186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.520311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.520352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.520513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.520539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.520676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.520702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.520840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.520866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.521052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.521085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.521246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.521290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.521417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.521441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.521547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.521572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.521721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.521749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.521899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.521928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.522077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.522103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.522220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.522245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.522379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.522404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.522542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.522567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.522704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.522730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.522884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.522913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.523070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.523099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.661 [2024-07-26 02:09:24.523273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.661 [2024-07-26 02:09:24.523305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.661 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.523478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.523503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.523655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.523683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.523830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.523858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.523998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.524025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.524165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.524190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.524294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.524320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.524471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.524496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.524607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.524631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.524732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.524756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.524869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.524893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.525009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.525035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.525222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.525250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.525432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.525457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.525603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.525629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.525756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.525780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.525886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.525911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.526045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.526078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.526193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.526233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.526355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.526383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.526536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.526561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.526700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.526725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.526864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.526907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.527054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.527089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.527237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.527264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.527394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.527419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.527558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.527582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.527736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.527768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.527947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.527975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.528125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.528151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.528265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.528290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.528424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.528449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.528597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.528637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.528741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.528765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.528909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.528934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.529103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.529131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.529321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.529347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.529481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.529507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.529695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.529723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.662 [2024-07-26 02:09:24.529839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.662 [2024-07-26 02:09:24.529868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.662 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.529984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.530024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.530179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.530204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.530389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.530416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.530568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.530596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.530743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.530771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.530931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.530956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.531129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.531155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.531282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.531308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.531451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.531479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.531646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.531671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.531804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.531829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.531953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.531981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.532102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.532131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.532268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.532294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.532401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.532430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.532574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.532602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.532756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.532783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.532942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.532966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.533136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.533165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.533344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.533372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.533516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.533543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.533695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.533719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.533834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.533859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.534044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.534079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.534203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.534230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.534392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.534417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.534558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.534583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.534745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.534770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.534928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.534956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.535096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.535123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.535261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.535286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.535436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.535465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.535588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.535616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.535745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.535770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.535882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.535909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.536067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.536095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.536244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.536271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.536405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.536430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.663 qpair failed and we were unable to recover it.
00:33:42.663 [2024-07-26 02:09:24.536544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.663 [2024-07-26 02:09:24.536569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.664 qpair failed and we were unable to recover it.
00:33:42.664 [2024-07-26 02:09:24.536695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.664 [2024-07-26 02:09:24.536724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.664 qpair failed and we were unable to recover it.
00:33:42.664 [2024-07-26 02:09:24.536897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.664 [2024-07-26 02:09:24.536925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.664 qpair failed and we were unable to recover it.
00:33:42.664 [2024-07-26 02:09:24.537121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.664 [2024-07-26 02:09:24.537146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.664 qpair failed and we were unable to recover it.
00:33:42.664 [2024-07-26 02:09:24.537317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.664 [2024-07-26 02:09:24.537342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.664 qpair failed and we were unable to recover it.
00:33:42.664 [2024-07-26 02:09:24.537479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.664 [2024-07-26 02:09:24.537504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.664 qpair failed and we were unable to recover it.
00:33:42.664 [2024-07-26 02:09:24.537676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.537705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.537843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.537867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.538031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.538057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.538269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.538298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.538440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.538466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 
00:33:42.664 [2024-07-26 02:09:24.538624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.538648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.538806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.538834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.538986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.539014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.539188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.539215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.539356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.539381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 
00:33:42.664 [2024-07-26 02:09:24.539485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.539511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.539624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.539653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.539813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.539838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.539977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.540020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.540188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.540213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 
00:33:42.664 [2024-07-26 02:09:24.540355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.540380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.540542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.540584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.540718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.540742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.540884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.540909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.541072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.541101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 
00:33:42.664 [2024-07-26 02:09:24.541253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.541281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.541438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.541462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.541626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.541651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.541790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.541815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.541962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.541987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 
00:33:42.664 [2024-07-26 02:09:24.542125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.542152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.542261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.542301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.542417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.542444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.542585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.542613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.542768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.542792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 
00:33:42.664 [2024-07-26 02:09:24.542924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.542949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.543075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.543104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.543231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.664 [2024-07-26 02:09:24.543258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.664 qpair failed and we were unable to recover it. 00:33:42.664 [2024-07-26 02:09:24.543391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.543416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.543560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.543584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 
00:33:42.665 [2024-07-26 02:09:24.543722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.543747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.543884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.543911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.544020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.544044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.544229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.544265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.544442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.544470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 
00:33:42.665 [2024-07-26 02:09:24.544620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.544648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.544828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.544853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.545012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.545039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.545205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.545231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.545349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.545374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 
00:33:42.665 [2024-07-26 02:09:24.545509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.545534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.545651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.545676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.545784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.545810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.545922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.545947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.546109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.546135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 
00:33:42.665 [2024-07-26 02:09:24.546248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.546274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.546437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.546462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.546650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.546677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.546830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.546856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.547015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.547043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 
00:33:42.665 [2024-07-26 02:09:24.547201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.547229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.547367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.547395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.547548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.547573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.547678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.547703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.547838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.547867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 
00:33:42.665 [2024-07-26 02:09:24.548017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.548044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.548208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.548233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.548371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.548396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.548599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.548625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.548766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.548790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 
00:33:42.665 [2024-07-26 02:09:24.548905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.548935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.549049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.549082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.549251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.549279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.549435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.549460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.549594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.549620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 
00:33:42.665 [2024-07-26 02:09:24.549752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.549777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.549917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.549956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.665 qpair failed and we were unable to recover it. 00:33:42.665 [2024-07-26 02:09:24.550125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.665 [2024-07-26 02:09:24.550151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.666 qpair failed and we were unable to recover it. 00:33:42.666 [2024-07-26 02:09:24.550260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.666 [2024-07-26 02:09:24.550285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.666 qpair failed and we were unable to recover it. 00:33:42.666 [2024-07-26 02:09:24.550426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.666 [2024-07-26 02:09:24.550451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.666 qpair failed and we were unable to recover it. 
00:33:42.666 [2024-07-26 02:09:24.550604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.666 [2024-07-26 02:09:24.550632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.666 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed sequence repeats for tqpair=0x1545f40 from 02:09:24.550799 through 02:09:24.555901, all with addr=10.0.0.2, port=4420, errno = 111 ...]
00:33:42.666 [2024-07-26 02:09:24.556056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.666 [2024-07-26 02:09:24.556101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.666 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7fd148000b90 from 02:09:24.556251 through 02:09:24.570570, all with addr=10.0.0.2, port=4420, errno = 111 ...]
00:33:42.669 [2024-07-26 02:09:24.570737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.570781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.570960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.570989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.571136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.571167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.571330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.571355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.571494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.571521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 
00:33:42.669 [2024-07-26 02:09:24.571700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.571759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.571962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.572015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.572159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.572186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.572297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.572323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.572459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.572485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 
00:33:42.669 [2024-07-26 02:09:24.572626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.572652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.572764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.572790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.572928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.572970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.573142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.573170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.573282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.573309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 
00:33:42.669 [2024-07-26 02:09:24.573416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.573442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.669 [2024-07-26 02:09:24.573577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.669 [2024-07-26 02:09:24.573603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.669 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.573763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.573792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.573972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.573999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.574139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.574166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 
00:33:42.670 [2024-07-26 02:09:24.574299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.574345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.574510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.574536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.574646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.574672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.574811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.574836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.574941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.574967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 
00:33:42.670 [2024-07-26 02:09:24.575146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.575172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.575310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.575352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.575507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.575533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.575641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.575668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.575835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.575866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 
00:33:42.670 [2024-07-26 02:09:24.576046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.576083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.576238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.576264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.576395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.576436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.576586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.576615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.576735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.576763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 
00:33:42.670 [2024-07-26 02:09:24.576915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.576941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.577080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.577128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.577254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.577283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.577444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.577471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.577608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.577634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 
00:33:42.670 [2024-07-26 02:09:24.577766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.577809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.577982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.578010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.578172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.578199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.578302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.578328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.578492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.578517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 
00:33:42.670 [2024-07-26 02:09:24.578648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.578676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.578836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.578862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.579001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.579026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.579147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.579174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.579312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.579337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 
00:33:42.670 [2024-07-26 02:09:24.579492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.579581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.579739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.579765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.579918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.579947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.580124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.580153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.580328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.580356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 
00:33:42.670 [2024-07-26 02:09:24.580487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.580512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.580615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.580640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.580765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.580794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.670 qpair failed and we were unable to recover it. 00:33:42.670 [2024-07-26 02:09:24.580913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.670 [2024-07-26 02:09:24.580942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.581124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.581150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 
00:33:42.671 [2024-07-26 02:09:24.581254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.581279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.581429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.581457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.581613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.581639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.581786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.581812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.581992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.582020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 
00:33:42.671 [2024-07-26 02:09:24.582169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.582197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.582345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.582374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.582532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.582558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.582739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.582768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.582909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.582936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 
00:33:42.671 [2024-07-26 02:09:24.583048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.583082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.583216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.583242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.583354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.583381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.583545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.583586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 00:33:42.671 [2024-07-26 02:09:24.583708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.671 [2024-07-26 02:09:24.583736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.671 qpair failed and we were unable to recover it. 
00:33:42.671 [2024-07-26 02:09:24.583872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.671 [2024-07-26 02:09:24.583899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.671 qpair failed and we were unable to recover it.
... (the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats from 02:09:24.584008 through 02:09:24.603912, for tqpair=0x7fd158000b90 and 0x7fd148000b90, all with addr=10.0.0.2, port=4420) ...
00:33:42.963 [2024-07-26 02:09:24.604071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.963 [2024-07-26 02:09:24.604100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.963 qpair failed and we were unable to recover it.
00:33:42.963 [2024-07-26 02:09:24.604253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.604281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.604414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.604440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.604576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.604601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.604744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.604773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.604934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.604960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 
00:33:42.963 [2024-07-26 02:09:24.605098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.605124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.605233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.605280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.605423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.605451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.605581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.605609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.605738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.605763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 
00:33:42.963 [2024-07-26 02:09:24.605903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.605928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.606031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.606056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.606203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.606231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.606364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.606390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.606526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.606552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 
00:33:42.963 [2024-07-26 02:09:24.606659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.606684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.606846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.606873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.607010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.607035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.607180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.607206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.607335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.607379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 
00:33:42.963 [2024-07-26 02:09:24.607492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.607520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.607701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.607726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.607905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.607933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.608078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.608121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.608253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.608278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 
00:33:42.963 [2024-07-26 02:09:24.608388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.608413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.608522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.608547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.608707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.608732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.608864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.963 [2024-07-26 02:09:24.608892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.963 qpair failed and we were unable to recover it. 00:33:42.963 [2024-07-26 02:09:24.609054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.609086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 
00:33:42.964 [2024-07-26 02:09:24.609221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.609264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.609420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.609448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.609593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.609621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.609767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.609792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.609927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.609952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 
00:33:42.964 [2024-07-26 02:09:24.610113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.610143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.610287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.610316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.610472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.610498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.610673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.610701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.610852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.610880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 
00:33:42.964 [2024-07-26 02:09:24.611068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.611094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.611250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.611275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.611425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.611454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.611604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.611633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.611783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.611812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 
00:33:42.964 [2024-07-26 02:09:24.611995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.612021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.612182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.612215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.612336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.612377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.612512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.612538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.612648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.612674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 
00:33:42.964 [2024-07-26 02:09:24.612798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.612824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.612957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.612986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.613145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.613174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.613327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.613353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.613456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.613482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 
00:33:42.964 [2024-07-26 02:09:24.613594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.613619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.613793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.613818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.613923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.613950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.614083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.614109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 00:33:42.964 [2024-07-26 02:09:24.614311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.964 [2024-07-26 02:09:24.614337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.964 qpair failed and we were unable to recover it. 
00:33:42.964 [2024-07-26 02:09:24.614445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.614470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.614634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.614660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.614796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.614821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.614932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.614958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.615098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.615124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 
00:33:42.965 [2024-07-26 02:09:24.615251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.615277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.615383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.615409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.615596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.615624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.615768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.615796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.615927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.615953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 
00:33:42.965 [2024-07-26 02:09:24.616115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.616141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.616273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.616299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.616431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.616459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.616619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.616645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 00:33:42.965 [2024-07-26 02:09:24.616752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.616778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it. 
00:33:42.965 [2024-07-26 02:09:24.616956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.965 [2024-07-26 02:09:24.616982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.965 qpair failed and we were unable to recover it.
[log repeats: the same three-line failure pattern — posix.c:1023:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — recurs continuously from 02:09:24.616956 through 02:09:24.636728 (timestamps 00:33:42.965-00:33:42.969).]
00:33:42.969 [2024-07-26 02:09:24.636836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.636862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.637001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.637027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.637171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.637200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.637335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.637361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.637494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.637519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 
00:33:42.969 [2024-07-26 02:09:24.637726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.637751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.637885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.637911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.638113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.638140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.638277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.638302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.638438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.638464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 
00:33:42.969 [2024-07-26 02:09:24.638596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.638622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.638735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.638761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.638889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.638918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.639039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.639074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.639256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.639282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 
00:33:42.969 [2024-07-26 02:09:24.639388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.639415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.639549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.639595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.639740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.639769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.639927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.639953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.640085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.640111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 
00:33:42.969 [2024-07-26 02:09:24.640218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.640244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.640372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.640397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.640519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.640548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.640682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.640708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.640841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.640867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 
00:33:42.969 [2024-07-26 02:09:24.641032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.641066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.641189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.641218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.641381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.641406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.641520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.641546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.641675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.641701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 
00:33:42.969 [2024-07-26 02:09:24.641837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.641866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.969 [2024-07-26 02:09:24.641989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.969 [2024-07-26 02:09:24.642015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.969 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.642193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.642219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.642329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.642373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.642498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.642526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 
00:33:42.970 [2024-07-26 02:09:24.642676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.642702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.642814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.642839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.643004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.643029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.643175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.643204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.643365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.643392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 
00:33:42.970 [2024-07-26 02:09:24.643500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.643526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.643684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.643710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.643900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.643928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.644091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.644117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.644276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.644301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 
00:33:42.970 [2024-07-26 02:09:24.644449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.644477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.644653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.644678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.644827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.644852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.645034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.645074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.645221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.645249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 
00:33:42.970 [2024-07-26 02:09:24.645423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.645483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.645613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.645638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.645778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.645807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.645969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.645997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.646123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.646153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 
00:33:42.970 [2024-07-26 02:09:24.646317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.646343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.646523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.646555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.646695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.646723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.646876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.646901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.647067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.647094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 
00:33:42.970 [2024-07-26 02:09:24.647277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.647305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.647483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.647508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.647641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.647666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.647831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.647857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.970 qpair failed and we were unable to recover it. 00:33:42.970 [2024-07-26 02:09:24.647994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.970 [2024-07-26 02:09:24.648019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 
00:33:42.971 [2024-07-26 02:09:24.648186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.648211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.648357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.648385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.648548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.648573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.648709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.648752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.648899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.648927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 
00:33:42.971 [2024-07-26 02:09:24.649075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.649101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.649235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.649261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.649364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.649389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.649544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.649572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.649720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.649749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 
00:33:42.971 [2024-07-26 02:09:24.649870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.649896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.650030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.650055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.650212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.650240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.650406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.650432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 00:33:42.971 [2024-07-26 02:09:24.650581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.971 [2024-07-26 02:09:24.650606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.971 qpair failed and we were unable to recover it. 
00:33:42.975 [2024-07-26 02:09:24.669700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.669725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.669892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.669918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.670055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.670087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.670272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.670301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.670450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.670476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 
00:33:42.975 [2024-07-26 02:09:24.670616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.670641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.670742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.670767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.670971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.670996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.671103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.671129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.671232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.671258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 
00:33:42.975 [2024-07-26 02:09:24.671368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.671395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.671563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.671589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.671724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.671752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.671906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.671937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.672123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.672152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 
00:33:42.975 [2024-07-26 02:09:24.672284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.672310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.672424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.672450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.672591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.672616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.672744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.672789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.672931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.672960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 
00:33:42.975 [2024-07-26 02:09:24.673139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.673168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.673302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.673328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.673490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.673531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.673709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.673738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.673854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.673883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 
00:33:42.975 [2024-07-26 02:09:24.674037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.674069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.674207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.674249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.674411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.674439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.674602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.674628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.674756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.674782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 
00:33:42.975 [2024-07-26 02:09:24.674935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.975 [2024-07-26 02:09:24.674964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.975 qpair failed and we were unable to recover it. 00:33:42.975 [2024-07-26 02:09:24.675108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.675138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.675292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.675321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.675456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.675482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.675589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.675615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 
00:33:42.976 [2024-07-26 02:09:24.675786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.675812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.675944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.675970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.676175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.676202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.676342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.676369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.676531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.676561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 
00:33:42.976 [2024-07-26 02:09:24.676723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.676753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.676945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.676972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.677129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.677159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.677344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.677374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.677521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.677587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 
00:33:42.976 [2024-07-26 02:09:24.677780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.677807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.677961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.677991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.678144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.678174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.678324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.678354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.678509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.678536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 
00:33:42.976 [2024-07-26 02:09:24.678640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.678667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.678846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.678876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.679055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.679088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.679198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.679228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.679374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.679417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 
00:33:42.976 [2024-07-26 02:09:24.679567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.679596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.679751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.679780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.679935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.679962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.680106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.680149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.680301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.680331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 
00:33:42.976 [2024-07-26 02:09:24.680497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.680524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.680690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.680716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.680897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.680926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.681099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.681126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 00:33:42.976 [2024-07-26 02:09:24.681264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.976 [2024-07-26 02:09:24.681290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.976 qpair failed and we were unable to recover it. 
00:33:42.977 [2024-07-26 02:09:24.681402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.681428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.681591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.681617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.681778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.681808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.681961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.681990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.682148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.682175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 
00:33:42.977 [2024-07-26 02:09:24.682356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.682385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.682545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.682574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.682685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.682714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.682844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.682870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.683001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.683027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 
00:33:42.977 [2024-07-26 02:09:24.683237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.683268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.683458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.683488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.683675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.683702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.683858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.683889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 00:33:42.977 [2024-07-26 02:09:24.684070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.977 [2024-07-26 02:09:24.684100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.977 qpair failed and we were unable to recover it. 
00:33:42.980 [2024-07-26 02:09:24.703768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.980 [2024-07-26 02:09:24.703797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.980 qpair failed and we were unable to recover it. 00:33:42.980 [2024-07-26 02:09:24.703922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.980 [2024-07-26 02:09:24.703952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.980 qpair failed and we were unable to recover it. 00:33:42.980 [2024-07-26 02:09:24.704105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.980 [2024-07-26 02:09:24.704135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.980 qpair failed and we were unable to recover it. 00:33:42.980 [2024-07-26 02:09:24.704293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.980 [2024-07-26 02:09:24.704319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.980 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.704456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.704483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 
00:33:42.981 [2024-07-26 02:09:24.704650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.704679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.704840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.704869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.705002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.705029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.705198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.705225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.705362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.705389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 
00:33:42.981 [2024-07-26 02:09:24.705502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.705529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.705702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.705729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.705873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.705917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.706069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.706099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.706258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.706288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 
00:33:42.981 [2024-07-26 02:09:24.706447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.706474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.706610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.706651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.706803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.706833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.706990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.707020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.707208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.707234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 
00:33:42.981 [2024-07-26 02:09:24.707394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.707423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.707586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.707614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.707753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.707780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.707963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.707997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.708170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.708197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 
00:33:42.981 [2024-07-26 02:09:24.708332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.708359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.708468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.708495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.708661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.708687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.708846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.708876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.709026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.709055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 
00:33:42.981 [2024-07-26 02:09:24.709225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.709254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.709388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.981 [2024-07-26 02:09:24.709415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.981 qpair failed and we were unable to recover it. 00:33:42.981 [2024-07-26 02:09:24.709577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.709622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.709772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.709801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.709932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.709961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 
00:33:42.982 [2024-07-26 02:09:24.710118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.710145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.710279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.710323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.710504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.710533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.710696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.710725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.710878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.710905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 
00:33:42.982 [2024-07-26 02:09:24.711045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.711096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.711216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.711246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.711390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.711419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.711586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.711613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.711725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.711767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 
00:33:42.982 [2024-07-26 02:09:24.711916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.711945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.712101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.712132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.712294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.712321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.712504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.712533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.712687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.712716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 
00:33:42.982 [2024-07-26 02:09:24.712883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.712910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.713049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.713096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.713241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.713284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.713461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.713491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.713611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.713641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 
00:33:42.982 [2024-07-26 02:09:24.713798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.713824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.713967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.714011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.714196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.714226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.714380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.714410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.714566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.714593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 
00:33:42.982 [2024-07-26 02:09:24.714732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.714776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.714923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.714953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.715083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.715114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.715297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.715328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.715485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.715515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 
00:33:42.982 [2024-07-26 02:09:24.715691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.715721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.715846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.982 [2024-07-26 02:09:24.715876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.982 qpair failed and we were unable to recover it. 00:33:42.982 [2024-07-26 02:09:24.716095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.716123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.716302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.716331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.716499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.716526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 
00:33:42.983 [2024-07-26 02:09:24.716662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.716706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.716865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.716891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.717005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.717032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.717185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.717215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.717394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.717423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 
00:33:42.983 [2024-07-26 02:09:24.717555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.717582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.717720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.717746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.717905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.717934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.718053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.718105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.718248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.718274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 
00:33:42.983 [2024-07-26 02:09:24.718413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.718456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.718631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.718658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.718838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.718867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.719047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.719080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 00:33:42.983 [2024-07-26 02:09:24.719195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.983 [2024-07-26 02:09:24.719239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.983 qpair failed and we were unable to recover it. 
00:33:42.983 [2024-07-26 02:09:24.719414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.983 [2024-07-26 02:09:24.719444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.983 qpair failed and we were unable to recover it.
00:33:42.983 [2024-07-26 02:09:24.719596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.983 [2024-07-26 02:09:24.719626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.983 qpair failed and we were unable to recover it.
00:33:42.983 [2024-07-26 02:09:24.719779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.983 [2024-07-26 02:09:24.719806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.983 qpair failed and we were unable to recover it.
00:33:42.983 [2024-07-26 02:09:24.719940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.983 [2024-07-26 02:09:24.719967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.983 qpair failed and we were unable to recover it.
00:33:42.983 [2024-07-26 02:09:24.720110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.983 [2024-07-26 02:09:24.720140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.983 qpair failed and we were unable to recover it.
00:33:42.983 [2024-07-26 02:09:24.720289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.983 [2024-07-26 02:09:24.720318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.983 qpair failed and we were unable to recover it.
00:33:42.983 [2024-07-26 02:09:24.720455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.983 [2024-07-26 02:09:24.720482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.983 qpair failed and we were unable to recover it.
00:33:42.983 [2024-07-26 02:09:24.720623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.983 [2024-07-26 02:09:24.720649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.983 qpair failed and we were unable to recover it.
00:33:42.983 [2024-07-26 02:09:24.720767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.983 [2024-07-26 02:09:24.720794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.983 qpair failed and we were unable to recover it.
00:33:42.983 [2024-07-26 02:09:24.720949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.983 [2024-07-26 02:09:24.720979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.983 qpair failed and we were unable to recover it.
00:33:42.983 [2024-07-26 02:09:24.721165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.983 [2024-07-26 02:09:24.721192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.983 qpair failed and we were unable to recover it.
00:33:42.983 [2024-07-26 02:09:24.721305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.721349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.721511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.721538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.721701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.721744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.721894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.721921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.722028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.722055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.722229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.722258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.722481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.722510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.722666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.722697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.722805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.722831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.722938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.722965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.723127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.723157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.723317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.723344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.723506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.723532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.723719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.723746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.723883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.723925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.724087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.724114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.724246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.724272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.724400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.724430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.724597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.724624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.724755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.724782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.724896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.724923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.725081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.725109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.725253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.725282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.725467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.725494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.725606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.725633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.725797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.725823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.725958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.725985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.726147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.726175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.726329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.726358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.726507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.726537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.726690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.726719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.726854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.726881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.727026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.727052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.727194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.727222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.727263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1553ef0 (9): Bad file descriptor
00:33:42.984 [2024-07-26 02:09:24.727416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.984 [2024-07-26 02:09:24.727455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.984 qpair failed and we were unable to recover it.
00:33:42.984 [2024-07-26 02:09:24.727626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.727672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.727800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.727846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.728008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.728036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.728196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.728235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.728399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.728429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.728579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.728608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.728829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.728859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.728979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.729006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.729111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.729138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.729278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.729305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.729453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.729483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.729703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.729755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.729954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.730024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.730181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.730209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.730373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.730418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.730580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.730624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.730809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.730852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.730988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.731014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.731181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.731226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.731384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.731429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.731579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.731623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.731762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.731788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.731903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.731930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.732114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.732145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.732300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.732329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.732468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.732500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.732638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.732665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.732827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.732854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.732962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.732989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.733136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.733176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.733309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.733340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.733519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.733549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.733693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.733722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.733871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.733902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.734073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.985 [2024-07-26 02:09:24.734101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.985 qpair failed and we were unable to recover it.
00:33:42.985 [2024-07-26 02:09:24.734242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.734269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.734424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.734454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.734626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.734656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.734855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.734913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.735078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.735107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.735268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.735295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.735431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.735475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.735752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.735806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.735948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.735975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.736088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.736115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.736307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.736338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.736491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.736520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.736651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.736695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.736868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.736898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.737076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.737121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.737235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.737261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.737384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.737415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.737576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.737609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.737806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.737862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.738025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.738053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.738203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.738231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.738389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.738440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.738696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.738749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.738891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.738918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.739071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.739115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.739269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.739316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.739472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.739517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.739674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.739719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.739854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.739880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.740016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.740043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.740208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.740251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.740403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.740447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.740635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.740665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.740845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.740871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.986 [2024-07-26 02:09:24.741010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.986 [2024-07-26 02:09:24.741039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.986 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.741163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.741189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.741299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.741324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.741518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.741547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.741696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.741725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.741856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.741885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.742068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.742108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.742231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.742260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.742446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.742490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.742695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.742740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.742904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.742935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.743101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.743128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.743285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.743329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.743469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.743497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.743626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.743672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.743837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.743865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.744001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.744027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.744236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.744280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.744443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.744475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.744635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.744662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.744902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.744961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.745144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.745171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.745304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.745334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.745495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.745524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.745711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.745775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.745949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.745978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.746143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.746171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.746326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.746356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.746483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.746514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.746721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.746776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.746931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.746958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.747090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.747127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.747236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.747263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.747430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.747457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.747618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.747648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.747800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.747829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.987 [2024-07-26 02:09:24.748010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.987 [2024-07-26 02:09:24.748039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.987 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.748189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.748234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.748421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.748451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.748629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.748674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.748836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.748880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.749057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.749102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.749246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.749274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.749432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.749461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.749590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.749632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.749863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.749915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.750049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.750085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.750227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.750254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.750407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.750437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.750626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.750655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.750987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.751053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.751244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.751271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.751419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.751446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.751704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.751734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.751919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.751949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.752139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.752166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.752274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.752300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.752441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.752484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.752605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.752634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.752839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.752868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.753014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.753044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.753207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.753233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.753397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.753423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.753581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.753610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.753767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.753797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.753978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.754008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.754182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.754210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.988 qpair failed and we were unable to recover it.
00:33:42.988 [2024-07-26 02:09:24.754374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.988 [2024-07-26 02:09:24.754401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.989 qpair failed and we were unable to recover it.
00:33:42.989 [2024-07-26 02:09:24.754541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.989 [2024-07-26 02:09:24.754567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.989 qpair failed and we were unable to recover it.
00:33:42.989 [2024-07-26 02:09:24.754685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.989 [2024-07-26 02:09:24.754728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.989 qpair failed and we were unable to recover it.
00:33:42.989 [2024-07-26 02:09:24.754891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.989 [2024-07-26 02:09:24.754920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.989 qpair failed and we were unable to recover it.
00:33:42.989 [2024-07-26 02:09:24.755086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.989 [2024-07-26 02:09:24.755113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.989 qpair failed and we were unable to recover it.
00:33:42.989 [2024-07-26 02:09:24.755283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.989 [2024-07-26 02:09:24.755309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.989 qpair failed and we were unable to recover it.
00:33:42.989 [2024-07-26 02:09:24.755470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.989 [2024-07-26 02:09:24.755500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.989 qpair failed and we were unable to recover it.
00:33:42.989 [2024-07-26 02:09:24.755632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.989 [2024-07-26 02:09:24.755677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.989 qpair failed and we were unable to recover it.
00:33:42.989 [2024-07-26 02:09:24.755834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.755864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.756045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.756087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.756221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.756249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.756382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.756408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.756541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.756571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 
00:33:42.989 [2024-07-26 02:09:24.756722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.756752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.756902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.756932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.757091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.757119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.757282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.757309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.757435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.757465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 
00:33:42.989 [2024-07-26 02:09:24.757584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.757614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.757771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.757800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.757958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.757985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.758122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.758149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.758289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.758316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 
00:33:42.989 [2024-07-26 02:09:24.758449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.758480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.758595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.758621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.758783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.758809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.758964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.758993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.759158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.759185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 
00:33:42.989 [2024-07-26 02:09:24.759335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.759361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.759507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.759536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.759687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.759716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.759890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.759919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.989 qpair failed and we were unable to recover it. 00:33:42.989 [2024-07-26 02:09:24.760066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.989 [2024-07-26 02:09:24.760093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 
00:33:42.990 [2024-07-26 02:09:24.760203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.760230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.760469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.760520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.760649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.760693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.760802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.760831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.760987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.761014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 
00:33:42.990 [2024-07-26 02:09:24.761169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.761197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.761363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.761392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.761558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.761588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.761704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.761734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.761857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.761887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 
00:33:42.990 [2024-07-26 02:09:24.762048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.762082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.762196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.762222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.762385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.762411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.762569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.762599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.762756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.762786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 
00:33:42.990 [2024-07-26 02:09:24.762936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.762966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.763095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.763122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.763253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.763294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.763450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.763482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.763637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.763666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 
00:33:42.990 [2024-07-26 02:09:24.763813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.763842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.764001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.764032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.764195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.764235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.764430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.764475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.764621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.764666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 
00:33:42.990 [2024-07-26 02:09:24.764783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.764828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.764946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.764973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.765107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.765135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.765274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.765301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.765455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.765483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 
00:33:42.990 [2024-07-26 02:09:24.765647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.765679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.765815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.765842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.765960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.765987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.766142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.766187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.990 [2024-07-26 02:09:24.766317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.766362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 
00:33:42.990 [2024-07-26 02:09:24.766495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.990 [2024-07-26 02:09:24.766539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.990 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.766647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.766676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.766851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.766879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.767025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.767076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.767263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.767294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 
00:33:42.991 [2024-07-26 02:09:24.767484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.767514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.767673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.767702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.767856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.767887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.768041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.768077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.768247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.768274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 
00:33:42.991 [2024-07-26 02:09:24.768416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.768446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.768616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.768647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.768796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.768826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.768982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.769008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.769150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.769177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 
00:33:42.991 [2024-07-26 02:09:24.769291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.769317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.769510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.769539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.769723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.769778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.769965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.769992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.770102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.770129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 
00:33:42.991 [2024-07-26 02:09:24.770269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.770296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.770445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.770474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.770633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.770663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.770840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.770869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 00:33:42.991 [2024-07-26 02:09:24.771043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.771082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it. 
00:33:42.991 [2024-07-26 02:09:24.771216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.991 [2024-07-26 02:09:24.771243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.991 qpair failed and we were unable to recover it.
[... same error triplet repeated continuously from 02:09:24.771382 through 02:09:24.792321: posix.c:1023:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fd158000b90 (and briefly tqpair=0x1545f40), all targeting addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it." ...]
00:33:42.995 [2024-07-26 02:09:24.792479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.792506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.792683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.792712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.792857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.792891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.793049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.793083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.793232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.793258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 
00:33:42.995 [2024-07-26 02:09:24.793368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.793395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.793531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.793557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.793709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.793739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.793867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.793897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.794033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.794067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 
00:33:42.995 [2024-07-26 02:09:24.794205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.794232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.794369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.794396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.794515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.794542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.794706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.794748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.794939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.794965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 
00:33:42.995 [2024-07-26 02:09:24.795101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.795128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.795272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.795299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.795487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.795513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.795654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.795681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.795862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.795892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 
00:33:42.995 [2024-07-26 02:09:24.796014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.796044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.796210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.796236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.796349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.796375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.796538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.796581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.796742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.796769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 
00:33:42.995 [2024-07-26 02:09:24.796910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.796936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.797099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.797126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.797238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.797265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.797399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.797441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.797599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.797626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 
00:33:42.995 [2024-07-26 02:09:24.797787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.797813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.797967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.797997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.995 qpair failed and we were unable to recover it. 00:33:42.995 [2024-07-26 02:09:24.798182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.995 [2024-07-26 02:09:24.798209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.798372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.798398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.798552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.798582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 
00:33:42.996 [2024-07-26 02:09:24.798708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.798738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.798897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.798924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.799101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.799132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.799255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.799285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.799445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.799472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 
00:33:42.996 [2024-07-26 02:09:24.799653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.799682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.799837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.799866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.800027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.800057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.800230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.800274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.800451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.800481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 
00:33:42.996 [2024-07-26 02:09:24.800636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.800662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.800845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.800875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.801021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.801050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.801188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.801215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.801318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.801345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 
00:33:42.996 [2024-07-26 02:09:24.801480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.801511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.801650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.801677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.801814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.801841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.802024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.802054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.802249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.802276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 
00:33:42.996 [2024-07-26 02:09:24.802414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.802441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.802664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.802694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.802888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.802915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.803087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.803117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.803302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.803331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 
00:33:42.996 [2024-07-26 02:09:24.803516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.803542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.803710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.803739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.803858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.803888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.804041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.804074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.804242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.804287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 
00:33:42.996 [2024-07-26 02:09:24.804433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.804463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.804594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.804620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.804789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.996 [2024-07-26 02:09:24.804815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.996 qpair failed and we were unable to recover it. 00:33:42.996 [2024-07-26 02:09:24.804978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.805008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.805182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.805209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 
00:33:42.997 [2024-07-26 02:09:24.805392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.805421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.805580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.805607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.805758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.805785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.805943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.805972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.806114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.806145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 
00:33:42.997 [2024-07-26 02:09:24.806309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.806335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.806496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.806523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.806652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.806682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.806842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.806869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.807012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.807056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 
00:33:42.997 [2024-07-26 02:09:24.807218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.807247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.807405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.807431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.807596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.807630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.807797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.807824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 00:33:42.997 [2024-07-26 02:09:24.807999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.997 [2024-07-26 02:09:24.808025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:42.997 qpair failed and we were unable to recover it. 
00:33:42.997 [2024-07-26 02:09:24.808187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.808214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.808371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.808400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.808533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.808560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.808701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.808728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.808872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.808902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.809030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.809065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.809207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.809234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.809356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.809382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.809582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.809610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.809714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.809759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.809930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.809956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.810115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.810142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.810328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.810357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.810481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.810511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.810637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.810664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.810800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.810827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.810993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.811020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.811150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.811176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.811358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.811388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.997 [2024-07-26 02:09:24.811511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.997 [2024-07-26 02:09:24.811540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.997 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.811682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.811709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.811867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.811894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.812031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.812066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.812227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.812254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.812399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.812444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.812554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.812583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.812751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.812777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.812935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.812962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.813126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.813155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.813304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.813331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.813476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.813519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.813662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.813692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.813839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.813865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.814008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.814035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.814211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.814238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.814344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.814371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.814501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.814528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.814653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.814687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.814818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.814846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.815011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.815054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.815218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.815245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.815349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.815376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.815536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.815563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.815725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.815755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.815938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.815964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.816128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.816155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.816296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.816340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.816494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.816521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.816706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.816735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.816890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.816917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.817050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.998 [2024-07-26 02:09:24.817100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.998 qpair failed and we were unable to recover it.
00:33:42.998 [2024-07-26 02:09:24.817244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.817288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.817442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.817472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.817657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.817684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.817805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.817850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.818026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.818055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.818191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.818218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.818355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.818381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.818524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.818552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.818681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.818708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.818874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.818904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.819044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.819081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.819241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.819267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.819379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.819406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.819578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.819608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.819731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.819758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.819917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.819958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.820133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.820164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.820294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.820321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.820428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.820455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.820598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.820625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.820752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.820779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.820943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.820972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.821124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.821151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.821303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.821329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.821468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.821495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.821628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.821655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.821769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.821800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.821930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.821974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.822150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.822180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.822336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.822363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:42.999 qpair failed and we were unable to recover it.
00:33:42.999 [2024-07-26 02:09:24.822497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.999 [2024-07-26 02:09:24.822524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.822659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.822686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.822835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.822862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.823023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.823053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.823192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.823222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.823376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.823402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.823581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.823611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.823760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.823789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.823946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.823973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.824113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.824156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.824323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.824349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.824482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.824509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.824691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.824720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.824865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.000 [2024-07-26 02:09:24.824894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.000 qpair failed and we were unable to recover it.
00:33:43.000 [2024-07-26 02:09:24.825056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.825089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.825226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.825252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.825393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.825420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.825525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.825552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.825715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.825742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 
00:33:43.000 [2024-07-26 02:09:24.825936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.825965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.826134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.826161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.826294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.826320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.826490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.826519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.826685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.826712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 
00:33:43.000 [2024-07-26 02:09:24.826848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.826874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.827082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.827111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.827263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.827290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.827451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.827493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.827611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.827640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 
00:33:43.000 [2024-07-26 02:09:24.827767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.827793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.827953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.827979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.828146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.828176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.828359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.828386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.828534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.828563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 
00:33:43.000 [2024-07-26 02:09:24.828738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.828768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.828933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.828960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.829103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.000 [2024-07-26 02:09:24.829130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.000 qpair failed and we were unable to recover it. 00:33:43.000 [2024-07-26 02:09:24.829264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.829291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.829396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.829422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 
00:33:43.001 [2024-07-26 02:09:24.829578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.829605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.829759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.829787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.829949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.829976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.830088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.830115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.830256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.830282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 
00:33:43.001 [2024-07-26 02:09:24.830419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.830446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.830581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.830608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.830772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.830801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.830983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.831009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.831150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.831177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 
00:33:43.001 [2024-07-26 02:09:24.831314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.831355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.831514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.831541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.831681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.831708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.831845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.831872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.832083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.832113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 
00:33:43.001 [2024-07-26 02:09:24.832241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.832268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.832409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.832436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.832618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.832645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.832799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.832829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.832953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.832982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 
00:33:43.001 [2024-07-26 02:09:24.833171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.833198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.833334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.833361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.833475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.833502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.833607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.833633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.833796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.833826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 
00:33:43.001 [2024-07-26 02:09:24.833964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.833993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.834188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.834215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.834366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.834396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.834563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.834589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.834729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.834756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 
00:33:43.001 [2024-07-26 02:09:24.834939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.834969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.835125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.835152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.835293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.835320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.835425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.001 [2024-07-26 02:09:24.835452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.001 qpair failed and we were unable to recover it. 00:33:43.001 [2024-07-26 02:09:24.835612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.835639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 
00:33:43.002 [2024-07-26 02:09:24.835839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.835866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.836044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.836103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.836240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.836267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.836384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.836411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.836542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.836569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 
00:33:43.002 [2024-07-26 02:09:24.836749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.836778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.836932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.836959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.837097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.837141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.837318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.837348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.837503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.837529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 
00:33:43.002 [2024-07-26 02:09:24.837693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.837737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.837896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.837925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.838053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.838086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.838227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.838254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.838412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.838438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 
00:33:43.002 [2024-07-26 02:09:24.838547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.838574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.838742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.838769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.838958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.838987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.839144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.839172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.839308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.839351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 
00:33:43.002 [2024-07-26 02:09:24.839509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.839538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.839698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.839725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.839885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.839915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.840064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.840094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 00:33:43.002 [2024-07-26 02:09:24.840278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.002 [2024-07-26 02:09:24.840305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.002 qpair failed and we were unable to recover it. 
00:33:43.002 [2024-07-26 02:09:24.840457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.002 [2024-07-26 02:09:24.840486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.002 qpair failed and we were unable to recover it.
00:33:43.002 ... last message group repeated for each subsequent connection attempt from [2024-07-26 02:09:24.840630] through [2024-07-26 02:09:24.861576]: posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...
00:33:43.006 [2024-07-26 02:09:24.861753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.861781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.861895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.861922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.862065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.862092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.862277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.862307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.862479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.862505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 
00:33:43.006 [2024-07-26 02:09:24.862640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.862666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.862773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.862800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.862963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.862990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.863177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.863204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.863436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.863470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 
00:33:43.006 [2024-07-26 02:09:24.863687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.863713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.863868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.863897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.864049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.864098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.864257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.864283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.864426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.864453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 
00:33:43.006 [2024-07-26 02:09:24.864595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.864621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.864752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.864778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.864922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.864949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.865083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.865126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.865251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.865278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 
00:33:43.006 [2024-07-26 02:09:24.865440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.865482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.865627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.865656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.865841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.865867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.866031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.866067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.866194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.866224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 
00:33:43.006 [2024-07-26 02:09:24.866408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.866434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.866619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.866648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.866871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.866900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.867034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.867067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.867202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.867228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 
00:33:43.006 [2024-07-26 02:09:24.867393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.867423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.867558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.867588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.006 qpair failed and we were unable to recover it. 00:33:43.006 [2024-07-26 02:09:24.867698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.006 [2024-07-26 02:09:24.867724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.867864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.867891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.868017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.868044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 
00:33:43.007 [2024-07-26 02:09:24.868213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.868243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.868401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.868430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.868556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.868584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.868755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.868781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.868947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.868976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 
00:33:43.007 [2024-07-26 02:09:24.869132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.869159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.869378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.869408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.869591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.869620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.869794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.869821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.869997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.870026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 
00:33:43.007 [2024-07-26 02:09:24.870160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.870187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.870321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.870348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.870535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.870564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.870756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.870782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.870895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.870926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 
00:33:43.007 [2024-07-26 02:09:24.871066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.871093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.871228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.871258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.871414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.871441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.871577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.871604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.871741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.871768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 
00:33:43.007 [2024-07-26 02:09:24.871934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.871960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.872119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.872150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.872336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.872363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.872513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.872539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.872677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.872720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 
00:33:43.007 [2024-07-26 02:09:24.872902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.872933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.873093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.873120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.873253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.873296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.873435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.873463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.873603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.873629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 
00:33:43.007 [2024-07-26 02:09:24.873799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.873825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.873958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.873987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.874145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.874172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.007 qpair failed and we were unable to recover it. 00:33:43.007 [2024-07-26 02:09:24.874283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.007 [2024-07-26 02:09:24.874310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.874449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.874476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 
00:33:43.008 [2024-07-26 02:09:24.874626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.874653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.874770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.874797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.874908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.874935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.875132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.875159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.875295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.875322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 
00:33:43.008 [2024-07-26 02:09:24.875460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.875489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.875655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.875681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.875786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.875812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.875975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.876002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.876206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.876232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 
00:33:43.008 [2024-07-26 02:09:24.876418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.876447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.876609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.876636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.876740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.876767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.876905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.876931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.877093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.877120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 
00:33:43.008 [2024-07-26 02:09:24.877253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.877280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.877414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.877440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.877561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.877587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.877730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.877756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.877912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.877947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 
00:33:43.008 [2024-07-26 02:09:24.878096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.878127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.878312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.878339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.878448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.878492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.878636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.878666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.878851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.878878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 
00:33:43.008 [2024-07-26 02:09:24.879069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.879099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.879213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.879243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.879401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.008 [2024-07-26 02:09:24.879427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.008 qpair failed and we were unable to recover it. 00:33:43.008 [2024-07-26 02:09:24.879604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.879633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.879750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.879779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 
00:33:43.009 [2024-07-26 02:09:24.879958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.879985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.880090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.880134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.880293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.880323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.880467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.880494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.880600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.880627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 
00:33:43.009 [2024-07-26 02:09:24.880756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.880782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.880897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.880924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.881025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.881051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.881282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.881312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.881475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.881502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 
00:33:43.009 [2024-07-26 02:09:24.881639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.881665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.881878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.881904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.882069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.882113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.882328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.882372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.882513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.882539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 
00:33:43.009 [2024-07-26 02:09:24.882713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.882740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.882907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.882936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.883101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.883128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.883264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.883291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.883469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.883499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 
00:33:43.009 [2024-07-26 02:09:24.883693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.883719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.883879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.883905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.884051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.884087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.884275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.884302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.884439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.884465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 
00:33:43.009 [2024-07-26 02:09:24.884630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.884656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.884845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.884871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.885033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.885066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.885201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.885228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.885368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.885414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 
00:33:43.009 [2024-07-26 02:09:24.885571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.885598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.885703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.009 [2024-07-26 02:09:24.885730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.009 qpair failed and we were unable to recover it. 00:33:43.009 [2024-07-26 02:09:24.885887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.885916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.886077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.886105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.886236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.886263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 
00:33:43.010 [2024-07-26 02:09:24.886432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.886459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.886673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.886700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.886916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.886945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.887134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.887161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.887275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.887302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 
00:33:43.010 [2024-07-26 02:09:24.887522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.887552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.887703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.887732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.887860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.887886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.888034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.888067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.888209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.888252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 
00:33:43.010 [2024-07-26 02:09:24.888440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.888466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.888644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.888673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.888848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.888877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.889035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.889066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.889208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.889234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 
00:33:43.010 [2024-07-26 02:09:24.889344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.889371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.889537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.889564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.889700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.889727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.889828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.889855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.889982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.890008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 
00:33:43.010 [2024-07-26 02:09:24.890146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.890173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.890328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.890387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.890563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.890592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.890772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.890803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.890952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.890981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 
00:33:43.010 [2024-07-26 02:09:24.891165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.891193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.891302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.891329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.891502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.891533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.891687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.891713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.891853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.891897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 
00:33:43.010 [2024-07-26 02:09:24.892079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.892106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.892241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.892268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.892447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.892477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.010 [2024-07-26 02:09:24.892692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.010 [2024-07-26 02:09:24.892718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.010 qpair failed and we were unable to recover it. 00:33:43.011 [2024-07-26 02:09:24.892885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.011 [2024-07-26 02:09:24.892915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.011 qpair failed and we were unable to recover it. 
00:33:43.011 [2024-07-26 02:09:24.893049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.011 [2024-07-26 02:09:24.893084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.011 qpair failed and we were unable to recover it. 00:33:43.011 [2024-07-26 02:09:24.893222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.011 [2024-07-26 02:09:24.893249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.011 qpair failed and we were unable to recover it. 00:33:43.011 [2024-07-26 02:09:24.893385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.011 [2024-07-26 02:09:24.893412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.011 qpair failed and we were unable to recover it. 00:33:43.011 [2024-07-26 02:09:24.893523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.011 [2024-07-26 02:09:24.893549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.011 qpair failed and we were unable to recover it. 00:33:43.011 [2024-07-26 02:09:24.893697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.011 [2024-07-26 02:09:24.893724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.011 qpair failed and we were unable to recover it. 
00:33:43.011 [2024-07-26 02:09:24.893828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.011 [2024-07-26 02:09:24.893856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.011 qpair failed and we were unable to recover it. 00:33:43.011 [2024-07-26 02:09:24.893995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.011 [2024-07-26 02:09:24.894023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.011 qpair failed and we were unable to recover it. 00:33:43.011 [2024-07-26 02:09:24.894245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.011 [2024-07-26 02:09:24.894290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.011 qpair failed and we were unable to recover it. 00:33:43.011 [2024-07-26 02:09:24.894481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.011 [2024-07-26 02:09:24.894509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.011 qpair failed and we were unable to recover it. 00:33:43.011 [2024-07-26 02:09:24.894692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.011 [2024-07-26 02:09:24.894721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.011 qpair failed and we were unable to recover it. 
00:33:43.011 [2024-07-26 02:09:24.894916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.894943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.895082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.895110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.895274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.895302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.895578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.895632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.895793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.895821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.896001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.896032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.896193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.896225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.896451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.896478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.896637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.896666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.896840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.896900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.897050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.897098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.897213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.897239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.897410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.897437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.897572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.897599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.897761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.897787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.897981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.898013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.898187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.898215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.898353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.898397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.898645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.898699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.898890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.898917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.899032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.899064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.899175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.899202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.899316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.899344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.899527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.899557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.899828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.011 [2024-07-26 02:09:24.899882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.011 qpair failed and we were unable to recover it.
00:33:43.011 [2024-07-26 02:09:24.900010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.900039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.900211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.900256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.900405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.900435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.900569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.900597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.900734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.900785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.900938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.900968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.901162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.901191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.901341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.901371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.901569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.901625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.901810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.901837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.901947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.901992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.902145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.902176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.902338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.902365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.902506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.902533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.902638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.902664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.902824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.902850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.903071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.903100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.903282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.903311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.903444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.903471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.903606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.903633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.903824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.903853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.903981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.904009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.904150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.904177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.904320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.904349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.904489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.904516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.904624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.904651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.904792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.904820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.905000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.905028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.905174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.905202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2421577 Killed "${NVMF_APP[@]}" "$@"
00:33:43.012 [2024-07-26 02:09:24.905356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.905383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 [2024-07-26 02:09:24.905536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.905563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:33:43.012 [2024-07-26 02:09:24.905730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.905758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:43.012 [2024-07-26 02:09:24.905922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.905953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.012 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:43.012 [2024-07-26 02:09:24.906116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.012 [2024-07-26 02:09:24.906143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.012 qpair failed and we were unable to recover it.
00:33:43.013 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:43.013 [2024-07-26 02:09:24.906277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:43.013 [2024-07-26 02:09:24.906321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.906572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.906623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.906794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.906821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.906979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.907008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.907169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.907200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.907361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.907387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.907521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.907547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.907703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.907733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.907901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.907928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.908070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.908098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.908257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.908287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.908425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.908452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.908611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.908638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.908802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.908831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.908962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.908989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.909123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.909150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.909318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.909363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.909508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.909536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.909681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.909708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 [2024-07-26 02:09:24.909883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.013 [2024-07-26 02:09:24.909914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.013 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2422122
00:33:43.013 qpair failed and we were unable to recover it.
00:33:43.013 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:43.014 [2024-07-26 02:09:24.910054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.910090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2422122
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.910242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.910286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2422122 ']'
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.910436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:43.014 [2024-07-26 02:09:24.910467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:43.014 [2024-07-26 02:09:24.910593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.910621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:43.014 [2024-07-26 02:09:24.910733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.910762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:43.014 [2024-07-26 02:09:24.910957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 02:09:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:43.014 [2024-07-26 02:09:24.910985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.911120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.911148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.911288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.911336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.911538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.911586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.911713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.911740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.911887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.911915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.912052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.912084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.912241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.912268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.912407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.912450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.912593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.912641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.912779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.912821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.912947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.912976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.913150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.913180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.913344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.913371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.913508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.913535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.913766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.913796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.913954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.913981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.914099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.914128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.914239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.914267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.914432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.914459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.914569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.914597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.914768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.914795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.914934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.914961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.915119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.915149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.915310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.915337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.915471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.915498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.915640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.915684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.915843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.014 [2024-07-26 02:09:24.915870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.014 qpair failed and we were unable to recover it.
00:33:43.014 [2024-07-26 02:09:24.916034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.916082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.916243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.916273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.916430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.916462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.916626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.916658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.916793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.916837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.916989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.917018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.917203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.917231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.917377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.917406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.917524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.917553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.917689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.917716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.917852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.917880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.918040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.918079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.918238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.918264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.918415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.918444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.918597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.918627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.918785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.918812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.918928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.918955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.919128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.919158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.919298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.919325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.919462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.919505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.919630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.919661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.919826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.919853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.919970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.919997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.920139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.920180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.920369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.920398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.920538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.920566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.920705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.920733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.920896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.920924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.921025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.921064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.921204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.921231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.921420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.921447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.921606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.921636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.921806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.921837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.921991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.922018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.922167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.922194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.922309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.922336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.922476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.922502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.922687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.922717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.922892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.922921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.015 [2024-07-26 02:09:24.923074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.015 [2024-07-26 02:09:24.923101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.015 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.923218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.923245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.923397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.923426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.923569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.923596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.923730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.923760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.923963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.923995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.924138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.924166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.924281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.924308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.924442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.924469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.924610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.924638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.924766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.924811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.925008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.925035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.925204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.925231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.925383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.925412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.925584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.925613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.925775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.925802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.925942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.925968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.926130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.926159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.926301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.926327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.926431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.926457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.926639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.926667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.926799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.926825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.926992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.927019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.927185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.927213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.927373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.927400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.927566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.927593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.927757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.927784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.927924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.927950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.928054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.928086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.928256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.928284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.928463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.928489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.928604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.016 [2024-07-26 02:09:24.928631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.016 qpair failed and we were unable to recover it.
00:33:43.016 [2024-07-26 02:09:24.928738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.016 [2024-07-26 02:09:24.928764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.016 qpair failed and we were unable to recover it. 00:33:43.016 [2024-07-26 02:09:24.928875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.016 [2024-07-26 02:09:24.928901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.016 qpair failed and we were unable to recover it. 00:33:43.016 [2024-07-26 02:09:24.929014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.016 [2024-07-26 02:09:24.929041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.016 qpair failed and we were unable to recover it. 00:33:43.016 [2024-07-26 02:09:24.929259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.016 [2024-07-26 02:09:24.929287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.016 qpair failed and we were unable to recover it. 00:33:43.016 [2024-07-26 02:09:24.929396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.016 [2024-07-26 02:09:24.929422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.016 qpair failed and we were unable to recover it. 
00:33:43.016 [2024-07-26 02:09:24.929533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.016 [2024-07-26 02:09:24.929559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.016 qpair failed and we were unable to recover it. 00:33:43.016 [2024-07-26 02:09:24.929725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.016 [2024-07-26 02:09:24.929751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.016 qpair failed and we were unable to recover it. 00:33:43.016 [2024-07-26 02:09:24.929932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.016 [2024-07-26 02:09:24.929960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.016 qpair failed and we were unable to recover it. 00:33:43.016 [2024-07-26 02:09:24.930102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.016 [2024-07-26 02:09:24.930130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.016 qpair failed and we were unable to recover it. 00:33:43.016 [2024-07-26 02:09:24.930268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.016 [2024-07-26 02:09:24.930295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.016 qpair failed and we were unable to recover it. 
00:33:43.016 [2024-07-26 02:09:24.930435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.016 [2024-07-26 02:09:24.930461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.930569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.930596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.930705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.930736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.930850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.930877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.931005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.931032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 
00:33:43.017 [2024-07-26 02:09:24.931183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.931211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.931309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.931335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.931444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.931470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.931574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.931601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.931741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.931767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 
00:33:43.017 [2024-07-26 02:09:24.931911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.931937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.932048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.932082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.932218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.932248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.932364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.932391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.932509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.932535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 
00:33:43.017 [2024-07-26 02:09:24.932678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.932706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.932872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.932899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.933005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.933031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.933152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.933179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.933318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.933344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 
00:33:43.017 [2024-07-26 02:09:24.933450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.933477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.933667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.933693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.933820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.933846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.933957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.933983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.934116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.934143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 
00:33:43.017 [2024-07-26 02:09:24.934251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.934277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.934448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.934474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.934612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.934638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.934771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.934797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.934908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.934935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 
00:33:43.017 [2024-07-26 02:09:24.935044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.935081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.935193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.935220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.935367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.935393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.935524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.935550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.935687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.935713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 
00:33:43.017 [2024-07-26 02:09:24.935853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.935879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.936024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.936050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.936179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.936218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.936364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.936394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.936503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.936530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 
00:33:43.017 [2024-07-26 02:09:24.936637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.936664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.017 [2024-07-26 02:09:24.936773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.017 [2024-07-26 02:09:24.936801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.017 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.936936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.936967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.937079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.937106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.937223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.937250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 
00:33:43.018 [2024-07-26 02:09:24.937356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.937382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.937490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.937516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.937660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.937687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.937798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.937824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.937966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.937992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 
00:33:43.018 [2024-07-26 02:09:24.938110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.938138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.938256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.938283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.938419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.938445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.938581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.938607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.938750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.938787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 
00:33:43.018 [2024-07-26 02:09:24.938934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.938962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.939087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.939114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.939231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.939258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.939401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.939426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.939542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.939567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 
00:33:43.018 [2024-07-26 02:09:24.939676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.939702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.939807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.939832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.939945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.939969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.940102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.940129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.940278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.940316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 
00:33:43.018 [2024-07-26 02:09:24.940493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.940521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.940647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.940675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.940844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.940872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.940994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.941033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.941163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.941196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 
00:33:43.018 [2024-07-26 02:09:24.941316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.941343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.941449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.941475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.941607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.941646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.941789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.941817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 00:33:43.018 [2024-07-26 02:09:24.941931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.018 [2024-07-26 02:09:24.941957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.018 qpair failed and we were unable to recover it. 
00:33:43.018 [2024-07-26 02:09:24.942097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.018 [2024-07-26 02:09:24.942125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.018 qpair failed and we were unable to recover it.
00:33:43.018 [2024-07-26 02:09:24.942237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.018 [2024-07-26 02:09:24.942263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.018 qpair failed and we were unable to recover it.
00:33:43.018 [2024-07-26 02:09:24.942369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.018 [2024-07-26 02:09:24.942395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.304 qpair failed and we were unable to recover it.
00:33:43.304 [2024-07-26 02:09:24.942538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.304 [2024-07-26 02:09:24.942564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.304 qpair failed and we were unable to recover it.
00:33:43.304 [2024-07-26 02:09:24.942677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.304 [2024-07-26 02:09:24.942703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.304 qpair failed and we were unable to recover it.
00:33:43.304 [2024-07-26 02:09:24.942818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.304 [2024-07-26 02:09:24.942845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.304 qpair failed and we were unable to recover it.
00:33:43.304 [2024-07-26 02:09:24.942953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.304 [2024-07-26 02:09:24.942979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.304 qpair failed and we were unable to recover it.
00:33:43.304 [2024-07-26 02:09:24.943134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.943172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.943294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.943320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.943435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.943460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.943566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.943591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.943704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.943730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.943840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.943865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.943977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.944004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.944134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.944162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.944319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.944345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.944479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.944505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.944621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.944647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.944761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.944787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.944902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.944928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.945064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.945091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.945230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.945268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.945414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.945442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.945578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.945604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.945716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.945741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.945861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.945887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.946007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.946047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.946209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.946236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.946375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.946401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.946510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.946537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.946644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.946671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.946814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.946840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.946970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.946997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.947116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.947143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.947273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.947299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.947473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.947499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.947612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.947638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.947775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.947801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.947922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.947951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.305 [2024-07-26 02:09:24.948097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.305 [2024-07-26 02:09:24.948124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.305 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.948261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.948285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.948403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.948429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.948543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.948568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.948671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.948696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.948833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.948860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.949015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.949054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.949214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.949243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.949400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.949427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.949566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.949592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.949723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.949749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.949861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.949889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.950010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.950049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.950216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.950245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.950389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.950415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.950522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.950548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.950675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.950701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.950841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.950867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.951002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.951030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.951160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.951187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.951303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.951328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.951465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.951491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.951622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.951650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.951761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.951786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.951918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.951943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.952093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.952119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.952257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.952285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.952434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.952460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.952626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.952652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.952755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.952781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.952885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.952911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.953020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.953047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.953172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.953199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.953318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.306 [2024-07-26 02:09:24.953347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.306 qpair failed and we were unable to recover it.
00:33:43.306 [2024-07-26 02:09:24.953482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.953508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.953645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.953671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.953821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.953847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.953996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.954035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.954163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.954191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.954308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.954335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.954443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.954468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.954608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.954633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.954763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.954788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.954925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.954950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.955066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.955106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.955227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.955252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.955363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.955388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.955490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.955515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.955623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.955647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.955785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.955814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.955949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.955988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.956115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.956143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.956285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.956313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.956457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.956485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.956620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.956646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.956759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.956786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.956926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.956953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.957096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.957122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.957283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.957308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.957459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.957485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.957593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.957618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.957759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.957784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.957919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.957945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.958121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.958148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.958256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.958282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.958415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.958441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.307 qpair failed and we were unable to recover it.
00:33:43.307 [2024-07-26 02:09:24.958560] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization...
00:33:43.307 [2024-07-26 02:09:24.958581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.307 [2024-07-26 02:09:24.958608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.308 qpair failed and we were unable to recover it.
00:33:43.308 [2024-07-26 02:09:24.958622] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:43.308 [2024-07-26 02:09:24.958740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.308 [2024-07-26 02:09:24.958765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.308 qpair failed and we were unable to recover it.
00:33:43.308 [2024-07-26 02:09:24.958904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.308 [2024-07-26 02:09:24.958929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.308 qpair failed and we were unable to recover it.
00:33:43.308 [2024-07-26 02:09:24.959052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.308 [2024-07-26 02:09:24.959098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.308 qpair failed and we were unable to recover it.
00:33:43.308 [2024-07-26 02:09:24.959246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.308 [2024-07-26 02:09:24.959275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.308 qpair failed and we were unable to recover it.
00:33:43.308 [2024-07-26 02:09:24.959414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.308 [2024-07-26 02:09:24.959442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.308 qpair failed and we were unable to recover it.
00:33:43.308 [2024-07-26 02:09:24.959580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.308 [2024-07-26 02:09:24.959606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.308 qpair failed and we were unable to recover it.
00:33:43.308 [2024-07-26 02:09:24.959736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.308 [2024-07-26 02:09:24.959763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.308 qpair failed and we were unable to recover it.
00:33:43.308 [2024-07-26 02:09:24.959895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.308 [2024-07-26 02:09:24.959923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.308 qpair failed and we were unable to recover it.
00:33:43.308 [2024-07-26 02:09:24.960075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.960102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.960232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.960259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.960394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.960420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.960525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.960551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.960728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.960753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 
00:33:43.308 [2024-07-26 02:09:24.960893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.960919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.961051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.961083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.961189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.961215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.961353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.961379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.961516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.961541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 
00:33:43.308 [2024-07-26 02:09:24.961679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.961705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.961842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.961868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.961976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.962004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.962179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.962223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.962361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.962388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 
00:33:43.308 [2024-07-26 02:09:24.962525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.962551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.962679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.962705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.962843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.962869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.963007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.963033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.963194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.963233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 
00:33:43.308 [2024-07-26 02:09:24.963355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.963383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.963518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.963544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.963658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.963683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.963816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.308 [2024-07-26 02:09:24.963841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.308 qpair failed and we were unable to recover it. 00:33:43.308 [2024-07-26 02:09:24.963955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.963981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 
00:33:43.309 [2024-07-26 02:09:24.964117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.964144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.964265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.964304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.964426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.964455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.964568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.964596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.964761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.964787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 
00:33:43.309 [2024-07-26 02:09:24.964898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.964924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.965086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.965112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.965246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.965272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.965380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.965405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.965535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.965561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 
00:33:43.309 [2024-07-26 02:09:24.965701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.965727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.965890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.965915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.966041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.966086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.966230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.966258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.966390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.966416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 
00:33:43.309 [2024-07-26 02:09:24.966550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.966576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.966706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.966731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.966866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.966891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.967003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.967028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.967146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.967173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 
00:33:43.309 [2024-07-26 02:09:24.967312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.967337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.967444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.967469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.967571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.967596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.967760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.967786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.967913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.967939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 
00:33:43.309 [2024-07-26 02:09:24.968079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.968105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.968209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.968234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.968376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.968402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.968540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.968566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.968668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.968694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 
00:33:43.309 [2024-07-26 02:09:24.968805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.968831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.968981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.309 [2024-07-26 02:09:24.969019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.309 qpair failed and we were unable to recover it. 00:33:43.309 [2024-07-26 02:09:24.969171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.969199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.969319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.969347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.969482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.969508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 
00:33:43.310 [2024-07-26 02:09:24.969644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.969670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.969785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.969812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.969919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.969946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.970102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.970128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.970240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.970266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 
00:33:43.310 [2024-07-26 02:09:24.970399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.970425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.970557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.970582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.970739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.970772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.970904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.970932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.971048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.971082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 
00:33:43.310 [2024-07-26 02:09:24.971220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.971246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.971374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.971400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.971545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.971571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.971706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.971732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.971847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.971875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 
00:33:43.310 [2024-07-26 02:09:24.971977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.972003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.972133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.972171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.972304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.972332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.972465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.972491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 00:33:43.310 [2024-07-26 02:09:24.972633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.310 [2024-07-26 02:09:24.972659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.310 qpair failed and we were unable to recover it. 
00:33:43.310 .. 00:33:43.314 [2024-07-26 02:09:24.972797 .. 02:09:24.989881] (~110 further near-identical records elided) posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, each followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 / 0x7fd148000b90 / 0x7fd150000b90 / 0x7fd158000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."
00:33:43.314 [2024-07-26 02:09:24.990008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.990047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.990201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.990228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.990370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.990396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.990529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.990555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.990696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.990722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 
00:33:43.314 [2024-07-26 02:09:24.990860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.990887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.991023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.991050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.991158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.991184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.991290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.991316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.991454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.991480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 
00:33:43.314 [2024-07-26 02:09:24.991586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.991611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.991723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.991754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.991869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.991896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.992040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.992089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.992238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.992267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 
00:33:43.314 [2024-07-26 02:09:24.992401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.992427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.992534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.992559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.992672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.992699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.992803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.992830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.992960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.992998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 
00:33:43.314 [2024-07-26 02:09:24.993142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.993182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.993353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.993381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.993514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.993540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.993643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.993669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.993776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.993804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 
00:33:43.314 [2024-07-26 02:09:24.993959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.993998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.994147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.314 [2024-07-26 02:09:24.994175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.314 qpair failed and we were unable to recover it. 00:33:43.314 [2024-07-26 02:09:24.994335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.994361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.994471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.994497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.994624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.994653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 
00:33:43.315 [2024-07-26 02:09:24.994757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.994784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.994902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.994941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.995051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.995084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.995196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.995225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.995331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.995357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 
00:33:43.315 [2024-07-26 02:09:24.995499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.995525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.995639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.995666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.995802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.995829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.995956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.995994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.996115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.996142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 
00:33:43.315 [2024-07-26 02:09:24.996252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.996278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.996437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.996462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.996594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.996619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.996725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.996753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.996893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.996920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 
00:33:43.315 [2024-07-26 02:09:24.997070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.997110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.997223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.997251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.997426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.997453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.315 [2024-07-26 02:09:24.997586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.997612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.997736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.997763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 
00:33:43.315 [2024-07-26 02:09:24.997887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.997914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.998050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.998088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.998223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.998250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.998363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.998389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 00:33:43.315 [2024-07-26 02:09:24.998518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.315 [2024-07-26 02:09:24.998543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.315 qpair failed and we were unable to recover it. 
00:33:43.316 [2024-07-26 02:09:24.998659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:24.998686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:24.998794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:24.998819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:24.998955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:24.998981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:24.999120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:24.999147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:24.999252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:24.999278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 
00:33:43.316 [2024-07-26 02:09:24.999420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:24.999446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:24.999552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:24.999577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:24.999681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:24.999706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:24.999840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:24.999865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:24.999973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:24.999998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 
00:33:43.316 [2024-07-26 02:09:25.000120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:25.000146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:25.000246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:25.000278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:25.000387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:25.000413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:25.000541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:25.000567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:25.000705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:25.000731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 
00:33:43.316 [2024-07-26 02:09:25.000873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:25.000898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:25.001026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:25.001051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:25.001157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:25.001183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:25.001289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:25.001315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 00:33:43.316 [2024-07-26 02:09:25.001433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.316 [2024-07-26 02:09:25.001460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.316 qpair failed and we were unable to recover it. 
00:33:43.316 [2024-07-26 02:09:25.001593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.316 [2024-07-26 02:09:25.001619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.316 qpair failed and we were unable to recover it.
[... the same three-line error group — posix.c:1023:posix_sock_create connect() failed with errno = 111 (connection refused), followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reporting a sock connection error to addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 02:09:25.001725 through 02:09:25.019860, cycling across tqpair=0x1545f40, 0x7fd148000b90, 0x7fd150000b90, and 0x7fd158000b90 ...]
00:33:43.320 [2024-07-26 02:09:25.019980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.020019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.020203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.020230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.020373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.020399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.020536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.020561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.020692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.020717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 
00:33:43.320 [2024-07-26 02:09:25.020855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.020882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.020986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.021011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.021145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.021184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.021329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.021361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.021502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.021534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 
00:33:43.320 [2024-07-26 02:09:25.021698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.021725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.021846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.021873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.022014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.022043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.022198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.022224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.022362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.022387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 
00:33:43.320 [2024-07-26 02:09:25.022499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.022524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.022655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.022681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.022790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.022815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.022952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.022977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.023099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.023127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 
00:33:43.320 [2024-07-26 02:09:25.023289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.023315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.023453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.023479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.023619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.023645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.023810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.023837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.023970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.023995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 
00:33:43.320 [2024-07-26 02:09:25.024115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.024141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.024274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.024299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.320 [2024-07-26 02:09:25.024441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.320 [2024-07-26 02:09:25.024466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.320 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.024606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.024633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.024771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.024796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 
00:33:43.321 [2024-07-26 02:09:25.024929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.024955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.025064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.025091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.025221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.025247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.025369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.025408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.025514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.025542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 
00:33:43.321 [2024-07-26 02:09:25.025660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.025697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.025807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.025837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.025978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.026005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.026132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.026159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.026294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.026321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 
00:33:43.321 [2024-07-26 02:09:25.026454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.026479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.026618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.026644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.026754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.026780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.026894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.026920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.027056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.027086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 
00:33:43.321 [2024-07-26 02:09:25.027197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.027223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.027351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.027381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.027503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.027530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.027657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.027685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.027850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.027875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 
00:33:43.321 [2024-07-26 02:09:25.028009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.028035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.028162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.028192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.028323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.028350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.028491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.028518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.028682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.028708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 
00:33:43.321 [2024-07-26 02:09:25.028840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.028866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.028972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.028998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.029117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.029143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.029271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.029297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.029441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.029467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 
00:33:43.321 [2024-07-26 02:09:25.029602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.029628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.321 qpair failed and we were unable to recover it. 00:33:43.321 [2024-07-26 02:09:25.029745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.321 [2024-07-26 02:09:25.029771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 00:33:43.322 [2024-07-26 02:09:25.029881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.029909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 00:33:43.322 [2024-07-26 02:09:25.030049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.030091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 00:33:43.322 [2024-07-26 02:09:25.030208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.030234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 
00:33:43.322 [2024-07-26 02:09:25.030396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.030421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 00:33:43.322 [2024-07-26 02:09:25.030554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.030579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 00:33:43.322 [2024-07-26 02:09:25.030656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:43.322 [2024-07-26 02:09:25.030689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.030714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 00:33:43.322 [2024-07-26 02:09:25.030832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.030857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 00:33:43.322 [2024-07-26 02:09:25.030969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.030995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 
00:33:43.322 [2024-07-26 02:09:25.031118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.031157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 00:33:43.322 [2024-07-26 02:09:25.031272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.031300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 00:33:43.322 [2024-07-26 02:09:25.031445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.031472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 00:33:43.322 [2024-07-26 02:09:25.031583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.031611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 00:33:43.322 [2024-07-26 02:09:25.031758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.322 [2024-07-26 02:09:25.031785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.322 qpair failed and we were unable to recover it. 
00:33:43.322 [2024-07-26 02:09:25.031895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.031922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.032045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.032087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.032198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.032224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.032358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.032384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.032503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.032528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.032702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.032727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.032830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.032856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.033010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.033049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.033186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.033215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.033320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.033357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.033494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.033520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.033689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.033715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.033827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.033854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.034019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.034046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.034166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.034194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.034387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.034426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.034541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.034569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.034712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.034739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.034876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.034902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.035109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.322 [2024-07-26 02:09:25.035148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.322 qpair failed and we were unable to recover it.
00:33:43.322 [2024-07-26 02:09:25.035268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.035295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.035460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.035487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.035623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.035649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.035763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.035789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.035901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.035928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.036046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.036095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.036243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.036271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.036417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.036446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.036610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.036642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.036780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.036807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.036941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.036968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.037142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.037181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.037324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.037351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.037474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.037500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.037629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.037656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.037795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.037820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.037955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.037981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.038143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.038172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.038287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.038314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.038430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.038457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.038595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.038622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.038773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.038799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.038917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.038944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.039066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.039093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.039195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.039222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.039357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.039383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.039495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.039520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.039632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.039658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.039805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.039833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.039967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.039993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.040142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.040169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.040307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.040333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.040471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.040497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.040652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.323 [2024-07-26 02:09:25.040678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.323 qpair failed and we were unable to recover it.
00:33:43.323 [2024-07-26 02:09:25.040814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.040840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.040975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.041006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.041182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.041209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.041354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.041380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.041521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.041547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.041656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.041683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.041821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.041847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.041983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.042010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.042153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.042179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.042295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.042321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.042437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.042464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.042597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.042624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.042756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.042782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.042905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.042945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.043071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.043101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.043253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.043279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.043458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.043487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.043628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.043654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.043764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.043790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.043923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.043950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.044098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.044124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.044262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.044288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.044423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.044449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.044583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.044608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.044716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.044752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.044895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.044923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.045070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.045097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.045239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.045267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.045415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.045447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.045587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.045613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.045766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.045806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.045982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.046010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.046136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.046164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.324 [2024-07-26 02:09:25.046278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.324 [2024-07-26 02:09:25.046305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.324 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.046474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.046501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.046609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.046636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.046776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.046804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.046939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.046966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.047084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.047112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.047226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.047254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.047375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.047403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.047542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.047568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.047696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.047725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.047838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.047864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.048000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.048026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.048172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.048199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.048303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.048329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.048477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.048504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.048643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.048670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.048803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.048829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.048968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.048995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.049142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.049169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.049308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.049335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.049472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.049498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.049660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.049687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.049831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.049858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.049992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.050019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.050195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.050234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.050353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.050380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.050516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.050543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.050650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.325 [2024-07-26 02:09:25.050676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.325 qpair failed and we were unable to recover it.
00:33:43.325 [2024-07-26 02:09:25.050786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.325 [2024-07-26 02:09:25.050812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.325 qpair failed and we were unable to recover it. 00:33:43.325 [2024-07-26 02:09:25.050924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.325 [2024-07-26 02:09:25.050951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.325 qpair failed and we were unable to recover it. 00:33:43.325 [2024-07-26 02:09:25.051113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.325 [2024-07-26 02:09:25.051140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.051265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.051293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.051392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.051418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 
00:33:43.326 [2024-07-26 02:09:25.051556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.051582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.051711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.051737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.051866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.051898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.052033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.052066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.052207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.052233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 
00:33:43.326 [2024-07-26 02:09:25.052341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.052367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.052479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.052504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.052635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.052661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.052772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.052797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.052917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.052943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 
00:33:43.326 [2024-07-26 02:09:25.053065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.053104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.053237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.053265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.053398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.053424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.053560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.053587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.053695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.053721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 
00:33:43.326 [2024-07-26 02:09:25.053859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.053885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.054034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.054075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.054188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.054214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.054360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.054399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.054522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.054550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 
00:33:43.326 [2024-07-26 02:09:25.054712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.054739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.054869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.054895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.055000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.055027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.055149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.055176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.055344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.055371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 
00:33:43.326 [2024-07-26 02:09:25.055481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.055508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.055613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.055640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.055779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.055806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.055972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.055998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.056112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.056142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 
00:33:43.326 [2024-07-26 02:09:25.056278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.326 [2024-07-26 02:09:25.056303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.326 qpair failed and we were unable to recover it. 00:33:43.326 [2024-07-26 02:09:25.056414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.056440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.056573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.056600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.056740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.056769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.056928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.056955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 
00:33:43.327 [2024-07-26 02:09:25.057085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.057113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.057272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.057299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.057439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.057465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.057601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.057627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.057758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.057785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 
00:33:43.327 [2024-07-26 02:09:25.057941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.057970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.058104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.058132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.058265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.058292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.058410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.058436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.058596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.058622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 
00:33:43.327 [2024-07-26 02:09:25.058732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.058760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.058908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.058947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.059067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.059116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.059266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.059293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.059397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.059432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 
00:33:43.327 [2024-07-26 02:09:25.059537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.059563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.059672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.059698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.059802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.059828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.059966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.059992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.060104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.060130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 
00:33:43.327 [2024-07-26 02:09:25.060293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.060319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.060434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.060466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.060591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.060617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.060752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.060779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.060886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.060912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 
00:33:43.327 [2024-07-26 02:09:25.061045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.061083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.061220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.061246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.061384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.061410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.061521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.061548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 00:33:43.327 [2024-07-26 02:09:25.061681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.327 [2024-07-26 02:09:25.061707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.327 qpair failed and we were unable to recover it. 
00:33:43.328 [2024-07-26 02:09:25.061820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.328 [2024-07-26 02:09:25.061846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.328 qpair failed and we were unable to recover it. 00:33:43.328 [2024-07-26 02:09:25.061975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.328 [2024-07-26 02:09:25.062014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.328 qpair failed and we were unable to recover it. 00:33:43.328 [2024-07-26 02:09:25.062166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.328 [2024-07-26 02:09:25.062204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.328 qpair failed and we were unable to recover it. 00:33:43.328 [2024-07-26 02:09:25.062317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.328 [2024-07-26 02:09:25.062345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.328 qpair failed and we were unable to recover it. 00:33:43.328 [2024-07-26 02:09:25.062487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.328 [2024-07-26 02:09:25.062513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.328 qpair failed and we were unable to recover it. 
00:33:43.328 [2024-07-26 02:09:25.062626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.062652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.062771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.062798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.062934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.062962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.063086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.063126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.063280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.063308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.063426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.063454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.063573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.063601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.063747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.063774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.063921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.063948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.064057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.064091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.064231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.064258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.064397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.064424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.064591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.064618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.064731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.064758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.064926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.064954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.065084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.065124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.065263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.065290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.065454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.065481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.065618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.065644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.065754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.065780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.065943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.065969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.066080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.066107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.066240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.066267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.066372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.066398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.066504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.066530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.066672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.066698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.066842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.066869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.067010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.067036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.328 [2024-07-26 02:09:25.067154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.328 [2024-07-26 02:09:25.067180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.328 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.067298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.067327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.067500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.067535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.067662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.067696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.067903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.067940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.068142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.068181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.068327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.068360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.068479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.068506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.068621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.068647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.068762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.068788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.068957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.068996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.069154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.069184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.069328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.069367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.069480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.069518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.069654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.069680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.069843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.069878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.069989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.070015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.070195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.070222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.070332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.070368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.070489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.070515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.070654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.070680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.070813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.070841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.070945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.070971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.071121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.071148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.071257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.071284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.071403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.071429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.071567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.071593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.071740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.071766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.071874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.071900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.072036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.072080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.072189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.072216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.072379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.072405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.072574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.072600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.072733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.072760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.329 qpair failed and we were unable to recover it.
00:33:43.329 [2024-07-26 02:09:25.072909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.329 [2024-07-26 02:09:25.072935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.073101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.073128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.073266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.073292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.073455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.073481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.073598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.073625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.073757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.073787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.073947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.073974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.074107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.074134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.074249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.074277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.074415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.074443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.074587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.074613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.074725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.074752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.074925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.074951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.075070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.075097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.075211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.075237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.075383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.075409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.075546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.075572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.075734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.075760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.075896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.075922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.076034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.076076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.076182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.076208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.076340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.076371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.076484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.076510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.076644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.076670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.076779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.076805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.076942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.076969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.077102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.077136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.077318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.077344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.077522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.077548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.077696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.330 [2024-07-26 02:09:25.077722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.330 qpair failed and we were unable to recover it.
00:33:43.330 [2024-07-26 02:09:25.077870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.077895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.078036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.078066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.078174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.078204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.078314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.078340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.078484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.078510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.078619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.078646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.078782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.078808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.078942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.078969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.079078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.079104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.079212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.079238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.079342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.079375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.079534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.079560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.079663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.079689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.079824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.079851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.079990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.080017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.080184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.080224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.080396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.080425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.080562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.080589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.080736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.080762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.080883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.080910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.081045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.081083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.081228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.081256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.081433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.331 [2024-07-26 02:09:25.081460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.331 qpair failed and we were unable to recover it.
00:33:43.331 [2024-07-26 02:09:25.081598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.331 [2024-07-26 02:09:25.081624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.331 qpair failed and we were unable to recover it. 00:33:43.331 [2024-07-26 02:09:25.081745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.331 [2024-07-26 02:09:25.081771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.331 qpair failed and we were unable to recover it. 00:33:43.331 [2024-07-26 02:09:25.081884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.331 [2024-07-26 02:09:25.081910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.331 qpair failed and we were unable to recover it. 00:33:43.331 [2024-07-26 02:09:25.082013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.331 [2024-07-26 02:09:25.082050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.331 qpair failed and we were unable to recover it. 00:33:43.331 [2024-07-26 02:09:25.082205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.331 [2024-07-26 02:09:25.082231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.331 qpair failed and we were unable to recover it. 
00:33:43.331 [2024-07-26 02:09:25.082344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.331 [2024-07-26 02:09:25.082371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.331 qpair failed and we were unable to recover it. 00:33:43.331 [2024-07-26 02:09:25.082541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.331 [2024-07-26 02:09:25.082571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.331 qpair failed and we were unable to recover it. 00:33:43.331 [2024-07-26 02:09:25.082705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.331 [2024-07-26 02:09:25.082732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.331 qpair failed and we were unable to recover it. 00:33:43.331 [2024-07-26 02:09:25.082881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.331 [2024-07-26 02:09:25.082908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.331 qpair failed and we were unable to recover it. 00:33:43.331 [2024-07-26 02:09:25.083043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.331 [2024-07-26 02:09:25.083090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.331 qpair failed and we were unable to recover it. 
00:33:43.331 [2024-07-26 02:09:25.083257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.331 [2024-07-26 02:09:25.083283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.331 qpair failed and we were unable to recover it. 00:33:43.331 [2024-07-26 02:09:25.083421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.083447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.083557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.083583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.083715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.083741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.083879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.083905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 
00:33:43.332 [2024-07-26 02:09:25.084008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.084034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.084174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.084201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.084313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.084339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.084457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.084483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.084634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.084673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 
00:33:43.332 [2024-07-26 02:09:25.084846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.084888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.085039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.085081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.085225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.085253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.085429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.085456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.085571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.085599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 
00:33:43.332 [2024-07-26 02:09:25.085760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.085788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.085931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.085958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.086152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.086179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.086320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.086348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.086466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.086497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 
00:33:43.332 [2024-07-26 02:09:25.086613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.086641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.086780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.086808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.086944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.086970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.087119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.087153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.087292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.087318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 
00:33:43.332 [2024-07-26 02:09:25.087433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.087462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.087582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.087609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.087719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.087747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.087888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.087914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.088056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.088089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 
00:33:43.332 [2024-07-26 02:09:25.088203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.088229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.088364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.088391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.088503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.088530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.088666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.088692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 00:33:43.332 [2024-07-26 02:09:25.088826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.332 [2024-07-26 02:09:25.088854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.332 qpair failed and we were unable to recover it. 
00:33:43.332 [2024-07-26 02:09:25.088992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.089020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.089176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.089206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.089322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.089350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.089482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.089508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.089624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.089651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 
00:33:43.333 [2024-07-26 02:09:25.089802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.089829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.089938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.089966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.090102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.090130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.090269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.090296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.090470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.090497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 
00:33:43.333 [2024-07-26 02:09:25.090661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.090688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.090798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.090826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.090936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.090964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.091093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.091132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.091299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.091327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 
00:33:43.333 [2024-07-26 02:09:25.091442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.091469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.091613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.091640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.091749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.091777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.091922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.091948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.092090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.092119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 
00:33:43.333 [2024-07-26 02:09:25.092260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.092286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.092407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.092434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.092544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.092570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.092708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.092733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.092870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.092897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 
00:33:43.333 [2024-07-26 02:09:25.093038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.093077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.093208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.093234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.093339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.093366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.093506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.093537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.093674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.093701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 
00:33:43.333 [2024-07-26 02:09:25.093810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.093838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.093976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.094002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.094158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.094198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.333 qpair failed and we were unable to recover it. 00:33:43.333 [2024-07-26 02:09:25.094337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.333 [2024-07-26 02:09:25.094367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.334 qpair failed and we were unable to recover it. 00:33:43.334 [2024-07-26 02:09:25.094482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.334 [2024-07-26 02:09:25.094508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.334 qpair failed and we were unable to recover it. 
00:33:43.337 [2024-07-26 02:09:25.110492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.110532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.110680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.110708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.110860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.110887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.111023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.111051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.111228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.111255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 
00:33:43.337 [2024-07-26 02:09:25.111366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.111393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.111566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.111593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.111716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.111756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.111900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.111927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.112050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.112092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 
00:33:43.337 [2024-07-26 02:09:25.112201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.112228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.112368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.112395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.112498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.112524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.112630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.112659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.112774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.112801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 
00:33:43.337 [2024-07-26 02:09:25.112916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.112943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.113081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.113108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.113241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.113268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.113393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.113432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.113606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.113635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 
00:33:43.337 [2024-07-26 02:09:25.113746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.113775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.113887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.113915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.114028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.114056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.114175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.114202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.114304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.114330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 
00:33:43.337 [2024-07-26 02:09:25.114435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.114461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.114594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.114620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.114737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.114765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.114935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.114963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.337 qpair failed and we were unable to recover it. 00:33:43.337 [2024-07-26 02:09:25.115102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.337 [2024-07-26 02:09:25.115132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 
00:33:43.338 [2024-07-26 02:09:25.115272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.115299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.115435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.115463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.115597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.115625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.115765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.115792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.115908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.115934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 
00:33:43.338 [2024-07-26 02:09:25.116075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.116101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.116212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.116238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.116369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.116395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.116538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.116564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.116682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.116708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 
00:33:43.338 [2024-07-26 02:09:25.116870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.116909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.117070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.117118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.117244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.117273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.117416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.117442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.117575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.117602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 
00:33:43.338 [2024-07-26 02:09:25.117772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.117799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.117907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.117934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.118069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.118099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.118240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.118267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.118412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.118438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 
00:33:43.338 [2024-07-26 02:09:25.118547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.118573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.118683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.118712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.118833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.118859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.119007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.119046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.119181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.119215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 
00:33:43.338 [2024-07-26 02:09:25.119353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.119382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.119547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.119574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.119690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.119718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.119834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.119860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.119969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.119995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 
00:33:43.338 [2024-07-26 02:09:25.120139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.120166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.120296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.120322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.120435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.338 [2024-07-26 02:09:25.120461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.338 qpair failed and we were unable to recover it. 00:33:43.338 [2024-07-26 02:09:25.120621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.120647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.120768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.120796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 
00:33:43.339 [2024-07-26 02:09:25.120924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.120964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.121089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.121128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.121245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.121273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.121385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.121413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.121515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.121542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 
00:33:43.339 [2024-07-26 02:09:25.121653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.121681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.121844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.121871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.121978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.122005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.122135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.122161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.122264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.122291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.122397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:43.339 [2024-07-26 02:09:25.122429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.339 [2024-07-26 02:09:25.122448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.339 [2024-07-26 02:09:25.122460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.339 [2024-07-26 02:09:25.122471] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.339 [2024-07-26 02:09:25.122427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.122453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.122531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:43.339 [2024-07-26 02:09:25.122564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.122590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 
00:33:43.339 [2024-07-26 02:09:25.122581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:43.339 [2024-07-26 02:09:25.122606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:43.339 [2024-07-26 02:09:25.122608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:43.339 [2024-07-26 02:09:25.122730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.122756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.122870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.122895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.123004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.123030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 00:33:43.339 [2024-07-26 02:09:25.123155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.339 [2024-07-26 02:09:25.123182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.339 qpair failed and we were unable to recover it. 
00:33:43.339 [2024-07-26 02:09:25.123317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.123351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.123462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.123489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.123617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.123643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.123752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.123778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.123920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.123946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.124053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.124086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.124232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.124264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.124410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.124449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.124600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.124628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.124742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.124770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.124878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.124909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.125025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.339 [2024-07-26 02:09:25.125052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.339 qpair failed and we were unable to recover it.
00:33:43.339 [2024-07-26 02:09:25.125175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.125202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.125317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.125344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.125484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.125511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.125621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.125648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.125762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.125790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.125893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.125919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.126034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.126079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.126196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.126223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.126341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.126374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.126489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.126516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.126643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.126669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.126774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.126800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.126915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.126941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.127073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.127100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.127210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.127237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.127348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.127378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.127484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.127518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.127629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.127655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.127757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.127783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.127888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.127917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.128025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.128057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.128199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.128226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.128333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.128360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.128497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.128524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.128670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.128697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.128822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.128850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.128958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.128985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.129109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.129135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.129262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.129288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.129400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.129426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.129554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.129580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.129743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.129769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.129908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.129934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.130046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.340 [2024-07-26 02:09:25.130083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.340 qpair failed and we were unable to recover it.
00:33:43.340 [2024-07-26 02:09:25.130189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.130215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.130320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.130352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.130474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.130500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.130645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.130684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.130801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.130828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.130959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.130987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.131097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.131124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.131270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.131296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.131439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.131465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.131616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.131642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.131750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.131776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.131896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.131934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.132072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.132101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.132215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.132241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.132359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.132386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.132517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.132543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.132646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.132672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.132780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.132808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.132961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.133001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.133146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.133187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.133328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.133356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.133469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.133497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.133599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.133626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.133730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.133757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.133869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.133897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.134015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.134055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.134216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.134244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.134354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.134380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.134544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.134571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.341 [2024-07-26 02:09:25.134715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.341 [2024-07-26 02:09:25.134744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.341 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.134858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.134885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.134996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.135027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.135153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.135179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.135315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.135345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.135460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.135486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.135582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.135608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.135721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.135750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.135889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.135916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.136029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.136065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.136177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.136203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.136317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.136346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.136507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.136533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.136659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.136686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.136785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.136811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.136937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.136963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.137092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.137119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.137218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.137244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.137392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.137430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.137548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.137577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.137715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.137744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.137872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.137898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.138036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.138070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.138205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.138232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.138363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.138389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.138506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.138537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.138707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.138734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.138866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.138893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.138999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.139026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.139151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.139184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.139297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.139324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.342 [2024-07-26 02:09:25.139453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.342 [2024-07-26 02:09:25.139478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.342 qpair failed and we were unable to recover it.
00:33:43.343 [2024-07-26 02:09:25.139617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.343 [2024-07-26 02:09:25.139643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.343 qpair failed and we were unable to recover it.
00:33:43.343 [2024-07-26 02:09:25.139754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.343 [2024-07-26 02:09:25.139779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.343 qpair failed and we were unable to recover it.
00:33:43.343 [2024-07-26 02:09:25.139894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.343 [2024-07-26 02:09:25.139920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.343 qpair failed and we were unable to recover it.
00:33:43.343 [2024-07-26 02:09:25.140079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.343 [2024-07-26 02:09:25.140119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.343 qpair failed and we were unable to recover it.
00:33:43.343 [2024-07-26 02:09:25.140243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.343 [2024-07-26 02:09:25.140272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.343 qpair failed and we were unable to recover it.
00:33:43.343 [2024-07-26 02:09:25.140412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.343 [2024-07-26 02:09:25.140439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.343 qpair failed and we were unable to recover it.
00:33:43.343 [2024-07-26 02:09:25.140576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.343 [2024-07-26 02:09:25.140602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.343 qpair failed and we were unable to recover it.
00:33:43.343 [2024-07-26 02:09:25.140715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.343 [2024-07-26 02:09:25.140742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.343 qpair failed and we were unable to recover it.
00:33:43.343 [2024-07-26 02:09:25.140896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.140936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.141066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.141093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.141198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.141223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.141362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.141388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.141489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.141515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 
00:33:43.343 [2024-07-26 02:09:25.141656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.141681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.141795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.141821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.141934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.141964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.142073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.142101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.142236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.142262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 
00:33:43.343 [2024-07-26 02:09:25.142381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.142408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.142522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.142549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.142679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.142705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.142842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.142868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.142978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.143003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 
00:33:43.343 [2024-07-26 02:09:25.143120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.143148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.143264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.143292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.143438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.143465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.143571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.143597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.143746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.143773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 
00:33:43.343 [2024-07-26 02:09:25.143940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.143966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.144075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.144102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.144222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.144249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.144416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.144442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 00:33:43.343 [2024-07-26 02:09:25.144582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.343 [2024-07-26 02:09:25.144608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.343 qpair failed and we were unable to recover it. 
00:33:43.344 [2024-07-26 02:09:25.144717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.144744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.144869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.144896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.145056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.145101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.145223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.145250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.145391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.145422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 
00:33:43.344 [2024-07-26 02:09:25.145567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.145593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.145703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.145732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.145892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.145931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.146050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.146085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.146199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.146226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 
00:33:43.344 [2024-07-26 02:09:25.146337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.146369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.146476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.146502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.146648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.146674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.146786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.146811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.146945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.146972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 
00:33:43.344 [2024-07-26 02:09:25.147101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.147129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.147282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.147309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.147424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.147450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.147565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.147591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.147698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.147724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 
00:33:43.344 [2024-07-26 02:09:25.147844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.147883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.148023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.148066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.148174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.148200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.148304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.148330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.148495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.148520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 
00:33:43.344 [2024-07-26 02:09:25.148651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.148677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.148825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.148851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.148956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.148981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.149107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.149134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.149241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.149268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 
00:33:43.344 [2024-07-26 02:09:25.149418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.149444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.149621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.149650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.344 [2024-07-26 02:09:25.149761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.344 [2024-07-26 02:09:25.149790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.344 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.149907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.149933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.150057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.150101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 
00:33:43.345 [2024-07-26 02:09:25.150211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.150238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.150343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.150369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.150530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.150556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.150693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.150719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.150875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.150914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 
00:33:43.345 [2024-07-26 02:09:25.151025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.151067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.151174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.151200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.151307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.151333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.151443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.151469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.151575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.151605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 
00:33:43.345 [2024-07-26 02:09:25.151738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.151763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.151866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.151892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.152004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.152033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.152154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.152183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 00:33:43.345 [2024-07-26 02:09:25.152355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.345 [2024-07-26 02:09:25.152382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.345 qpair failed and we were unable to recover it. 
00:33:43.345 [2024-07-26 02:09:25.152514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.345 [2024-07-26 02:09:25.152540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.345 qpair failed and we were unable to recover it.
[The same three-line failure pattern — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 02:09:25.152649 through 02:09:25.170166, cycling over tqpair values 0x1545f40, 0x7fd148000b90, 0x7fd150000b90, and 0x7fd158000b90, all targeting addr=10.0.0.2, port=4420.]
00:33:43.349 [2024-07-26 02:09:25.170287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.170318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.170446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.170473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.170614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.170640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.170823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.170850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.170962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.170989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 
00:33:43.349 [2024-07-26 02:09:25.171130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.171169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.171292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.171320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.171435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.171463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.171568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.171595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.171732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.171758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 
00:33:43.349 [2024-07-26 02:09:25.171868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.171894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.172032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.172073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.172184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.172210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.172318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.172350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.172465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.172492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 
00:33:43.349 [2024-07-26 02:09:25.172623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.172649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.172754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.172781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.172927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.172953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.173065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.173092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.173227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.173253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 
00:33:43.349 [2024-07-26 02:09:25.173351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.173377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.173514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.173540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.173671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.173700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.173812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.173840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.173969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.174007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 
00:33:43.349 [2024-07-26 02:09:25.174148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.174176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.174278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.174304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.174418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.174443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.174579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.174604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 00:33:43.349 [2024-07-26 02:09:25.174731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.349 [2024-07-26 02:09:25.174757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.349 qpair failed and we were unable to recover it. 
00:33:43.350 [2024-07-26 02:09:25.174870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.174897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.175028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.175055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.175178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.175204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.175323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.175351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.175464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.175491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 
00:33:43.350 [2024-07-26 02:09:25.175592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.175621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.175734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.175760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.175875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.175902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.176014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.176040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.176195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.176221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 
00:33:43.350 [2024-07-26 02:09:25.176337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.176367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.176481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.176507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.176609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.176634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.176772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.176798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.176913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.176942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 
00:33:43.350 [2024-07-26 02:09:25.177095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.177122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.177256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.177282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.177408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.177435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.177544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.177570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.177683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.177711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 
00:33:43.350 [2024-07-26 02:09:25.177848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.177875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.178027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.178073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.178222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.178249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.178352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.178378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.178599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.178626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 
00:33:43.350 [2024-07-26 02:09:25.178755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.178781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.178884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.178912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.179027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.179072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.179194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.179221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.179338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.179364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 
00:33:43.350 [2024-07-26 02:09:25.179473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.179499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.179632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.179658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.179819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.350 [2024-07-26 02:09:25.179847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.350 qpair failed and we were unable to recover it. 00:33:43.350 [2024-07-26 02:09:25.179979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.180018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.180174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.180203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 
00:33:43.351 [2024-07-26 02:09:25.180310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.180337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.180471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.180498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.180614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.180645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.180756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.180782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.180893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.180919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 
00:33:43.351 [2024-07-26 02:09:25.181038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.181085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.181195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.181223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.181330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.181357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.181493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.181519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.181657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.181683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 
00:33:43.351 [2024-07-26 02:09:25.181798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.181825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.181937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.181965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.182103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.182131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.182238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.182264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.182395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.182422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 
00:33:43.351 [2024-07-26 02:09:25.182555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.182581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.182718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.182744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.182883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.182912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.183028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.183054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.183182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.183221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 
00:33:43.351 [2024-07-26 02:09:25.183338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.183365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.183521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.183547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.183688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.183714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.183855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.183881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.184014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.184040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 
00:33:43.351 [2024-07-26 02:09:25.184155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.184180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.184285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.184312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.184428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.184454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.184595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.351 [2024-07-26 02:09:25.184621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.351 qpair failed and we were unable to recover it. 00:33:43.351 [2024-07-26 02:09:25.184746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.184775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 
00:33:43.352 [2024-07-26 02:09:25.184892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.184918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.185044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.185076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.185242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.185269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.185377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.185403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.185521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.185548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 
00:33:43.352 [2024-07-26 02:09:25.185684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.185711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.185813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.185840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.185995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.186034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.186185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.186213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.186321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.186347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 
00:33:43.352 [2024-07-26 02:09:25.186485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.186511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.186639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.186665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.186767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.186797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.186897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.186923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.187050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.187100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 
00:33:43.352 [2024-07-26 02:09:25.187246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.187275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.187385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.187411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.187519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.187546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.187705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.187745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.187903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.187932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 
00:33:43.352 [2024-07-26 02:09:25.188046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.188081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.188183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.188209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.188320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.188347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.188487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.188513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.188615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.188641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 
00:33:43.352 [2024-07-26 02:09:25.188776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.188801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.188934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.188973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.189127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.189155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.189263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.189291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.189430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.189457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 
00:33:43.352 [2024-07-26 02:09:25.189574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.189600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.189731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.189770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.189880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.189907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.190044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.190077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.352 [2024-07-26 02:09:25.190189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.190215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 
00:33:43.352 [2024-07-26 02:09:25.190324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.352 [2024-07-26 02:09:25.190350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.352 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.190453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.190479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.190586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.190613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.190723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.190749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.190898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.190924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 
00:33:43.353 [2024-07-26 02:09:25.191064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.191092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.191206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.191234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.191353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.191379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.191482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.191509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.191623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.191649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 
00:33:43.353 [2024-07-26 02:09:25.191773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.191799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.191910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.191938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.192090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.192129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.192283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.192322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.192443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.192471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 
00:33:43.353 [2024-07-26 02:09:25.192614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.192640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.192753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.192779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.192889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.192922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.193066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.193093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.193240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.193266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 
00:33:43.353 [2024-07-26 02:09:25.193369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.193396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.193494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.193519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.193679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.193705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.193812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.193839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.193971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.194009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 
00:33:43.353 [2024-07-26 02:09:25.194182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.194221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.194386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.194413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.194562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.194588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.194702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.194728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.194839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.194865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 
00:33:43.353 [2024-07-26 02:09:25.195011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.195037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.195157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.195184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.195293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.195322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.195427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.195454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.195563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.195590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 
00:33:43.353 [2024-07-26 02:09:25.195724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.195750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.195862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.195888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.353 qpair failed and we were unable to recover it. 00:33:43.353 [2024-07-26 02:09:25.196019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.353 [2024-07-26 02:09:25.196046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 00:33:43.354 [2024-07-26 02:09:25.196153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.196180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 00:33:43.354 [2024-07-26 02:09:25.196291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.196318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 
00:33:43.354 [2024-07-26 02:09:25.196487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.196514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 00:33:43.354 [2024-07-26 02:09:25.196629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.196655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 00:33:43.354 [2024-07-26 02:09:25.196819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.196858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 00:33:43.354 [2024-07-26 02:09:25.197023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.197073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 00:33:43.354 [2024-07-26 02:09:25.197204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.197242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 
00:33:43.354 [2024-07-26 02:09:25.197351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.197378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 00:33:43.354 [2024-07-26 02:09:25.197488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.197514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 00:33:43.354 [2024-07-26 02:09:25.197623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.197649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 00:33:43.354 [2024-07-26 02:09:25.197781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.197807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 00:33:43.354 [2024-07-26 02:09:25.197923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.354 [2024-07-26 02:09:25.197948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.354 qpair failed and we were unable to recover it. 
00:33:43.354 [2024-07-26 02:09:25.198077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.198103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.198212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.198238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.198391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.198417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.198531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.198556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.198659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.198688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.198827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.198853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.198985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.199011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.199128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.199155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.199266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.199293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.199397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.199424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.199531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.199557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.199659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.199685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.199783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.199809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.199925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.199951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.200070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.200096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.200223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.200249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.200358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.200384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.200483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.200509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.200612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.200638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.200768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.200794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.200941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.200980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.201105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.201144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.201254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.201281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.201392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.201418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.201561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.354 [2024-07-26 02:09:25.201588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.354 qpair failed and we were unable to recover it.
00:33:43.354 [2024-07-26 02:09:25.201744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.201770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.201911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.201938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.202066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.202106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.202221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.202248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.202349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.202375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.202504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.202530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.202668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.202697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.202807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.202835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.202950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.202976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.203083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.203114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.203220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.203245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.203380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.203406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.203516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.203541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.203689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.203717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.203828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.203856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.204026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.204054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.204172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.204198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.204335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.204361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.204474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.204501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.204609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.204636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.204779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.204808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.204952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.204979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.205112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.205139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.205260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.205287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.205401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.205428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.205541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.205567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.205673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.205700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.205808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.205833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.205957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.205996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.206117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.206145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.206252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.206278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.206376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.206402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.206562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.206589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.206697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.206723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.206860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.206886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.207002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.207028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.207171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.207203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.207311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.207337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.207454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.355 [2024-07-26 02:09:25.207481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.355 qpair failed and we were unable to recover it.
00:33:43.355 [2024-07-26 02:09:25.207584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.207610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.207723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.207749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.207878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.207916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.208064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.208093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.208209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.208236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.208377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.208404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.208542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.208569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.208679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.208706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.208815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.208843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.208955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.208981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.209127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.209166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.209296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.209325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.209472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.209499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.209634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.209660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.209804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.209831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.209940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.209966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.210090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.210118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.210255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.210282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.210391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.210417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.210518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.210543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.210683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.210709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.210866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.210904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.211042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.211084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.211203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.211229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.211339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.211365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.211494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.211520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.211629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.211655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.211789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.211816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.211926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.211953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.212078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.212117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.212232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.356 [2024-07-26 02:09:25.212260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.356 qpair failed and we were unable to recover it.
00:33:43.356 [2024-07-26 02:09:25.212376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.356 [2024-07-26 02:09:25.212402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.356 qpair failed and we were unable to recover it. 00:33:43.356 [2024-07-26 02:09:25.212504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.212530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.212637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.212663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.212767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.212793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.212902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.212928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 
00:33:43.357 [2024-07-26 02:09:25.213073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.213101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.213217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.213249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.213389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.213415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.213551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.213577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.213686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.213712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 
00:33:43.357 [2024-07-26 02:09:25.213820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.213848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.213958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.213985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.214107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.214134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.214249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.214274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.214423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.214449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 
00:33:43.357 [2024-07-26 02:09:25.214554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.214580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.214689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.214714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.214822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.214850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.214977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.215016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.215147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.215175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 
00:33:43.357 [2024-07-26 02:09:25.215323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.215351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.215455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.215481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.215592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.215619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.215720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.215747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.215861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.215887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 
00:33:43.357 [2024-07-26 02:09:25.216002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.216031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.216162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.216189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.216300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.216326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.216448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.216474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.357 [2024-07-26 02:09:25.216637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.216664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 
00:33:43.357 [2024-07-26 02:09:25.216790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.357 [2024-07-26 02:09:25.216828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.357 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.216958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.216996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.217118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.217146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.217260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.217291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.217393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.217419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 
00:33:43.358 [2024-07-26 02:09:25.217575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.217602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.217737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.217764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.217880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.217910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.218054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.218090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.218203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.218230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 
00:33:43.358 [2024-07-26 02:09:25.218338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.218364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.218502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.218528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.218659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.218685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.218821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.218847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.219007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.219033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 
00:33:43.358 [2024-07-26 02:09:25.219150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.219176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.219279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.219306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.219444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.219472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.219612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.219638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.219748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.219773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 
00:33:43.358 [2024-07-26 02:09:25.219905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.219931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.220042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.220072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.220185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.220210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.220314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.220339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.220447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.220472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 
00:33:43.358 [2024-07-26 02:09:25.220580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.220606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.220721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.220748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.220875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.220914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.221025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.221053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.221175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.221202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 
00:33:43.358 [2024-07-26 02:09:25.221319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.221346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.221452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.221478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.221612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.221637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.221749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.221775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.221885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.221914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 
00:33:43.358 [2024-07-26 02:09:25.222087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.222115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.222251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.222278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.222411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.358 [2024-07-26 02:09:25.222437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.358 qpair failed and we were unable to recover it. 00:33:43.358 [2024-07-26 02:09:25.222567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.222593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.222714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.222740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 
00:33:43.359 [2024-07-26 02:09:25.222848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.222875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.223045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.223092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.223212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.223239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.223342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.223367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.223510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.223536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 
00:33:43.359 [2024-07-26 02:09:25.223643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.223669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.223802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.223827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.223966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.223991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.224101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.224128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.224238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.224266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 
00:33:43.359 [2024-07-26 02:09:25.224427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.224465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.224614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.224641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.224778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.224804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.224916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.224942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 00:33:43.359 [2024-07-26 02:09:25.225078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.225105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it. 
00:33:43.359 [2024-07-26 02:09:25.225212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.359 [2024-07-26 02:09:25.225237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.359 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1023:posix_sock_create connect() failed errno = 111 → nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error → "qpair failed and we were unable to recover it.") repeats continuously from 02:09:25.225343 through 02:09:25.242010, for tqpair=0x1545f40, 0x7fd148000b90, 0x7fd150000b90, and 0x7fd158000b90, all against addr=10.0.0.2, port=4420 ...]
00:33:43.362 [2024-07-26 02:09:25.242127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.362 [2024-07-26 02:09:25.242155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.362 qpair failed and we were unable to recover it. 00:33:43.362 [2024-07-26 02:09:25.242269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.362 [2024-07-26 02:09:25.242295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.362 qpair failed and we were unable to recover it. 00:33:43.362 [2024-07-26 02:09:25.242393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.362 [2024-07-26 02:09:25.242419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.362 qpair failed and we were unable to recover it. 00:33:43.362 [2024-07-26 02:09:25.242523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.362 [2024-07-26 02:09:25.242549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.362 qpair failed and we were unable to recover it. 00:33:43.362 [2024-07-26 02:09:25.242650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.362 [2024-07-26 02:09:25.242675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.362 qpair failed and we were unable to recover it. 
00:33:43.362 [2024-07-26 02:09:25.242787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.362 [2024-07-26 02:09:25.242827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.362 qpair failed and we were unable to recover it. 00:33:43.362 [2024-07-26 02:09:25.242978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.362 [2024-07-26 02:09:25.243017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.362 qpair failed and we were unable to recover it. 00:33:43.362 [2024-07-26 02:09:25.243143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.362 [2024-07-26 02:09:25.243171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.362 qpair failed and we were unable to recover it. 00:33:43.362 [2024-07-26 02:09:25.243297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.362 [2024-07-26 02:09:25.243324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.362 qpair failed and we were unable to recover it. 00:33:43.362 [2024-07-26 02:09:25.243440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.243466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 
00:33:43.363 [2024-07-26 02:09:25.243578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.243605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.243716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.243742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.243879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.243905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.244018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.244047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.244167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.244194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 
00:33:43.363 [2024-07-26 02:09:25.244336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.244363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.244476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.244503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.244603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.244629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.244729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.244756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.244893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.244920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 
00:33:43.363 [2024-07-26 02:09:25.245066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.245095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.245202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.245233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.245343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.245369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.245467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.245493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.245598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.245624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 
00:33:43.363 [2024-07-26 02:09:25.245729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.245755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.245891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.245918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.246041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.246071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.246181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.246208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.246315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.246341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 
00:33:43.363 [2024-07-26 02:09:25.246470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.246495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.246597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.246623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.246724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.246749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.246855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.246885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.246992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.247021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 
00:33:43.363 [2024-07-26 02:09:25.247145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.247172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.247280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.247306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.247437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.247463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.247571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.247598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.247736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.247763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 
00:33:43.363 [2024-07-26 02:09:25.247871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.247900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.248049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.248099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.248207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.248235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.248347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.248373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 00:33:43.363 [2024-07-26 02:09:25.248489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.248515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.363 qpair failed and we were unable to recover it. 
00:33:43.363 [2024-07-26 02:09:25.248619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.363 [2024-07-26 02:09:25.248645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.248750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.248776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.248914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.248942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.249080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.249113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.249225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.249251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 
00:33:43.364 [2024-07-26 02:09:25.249363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.249390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.249502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.249529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.249694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.249722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.249841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.249868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.249975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.250002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 
00:33:43.364 [2024-07-26 02:09:25.250147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.250173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.250277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.250304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.250421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.250447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.250585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.250611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.250720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.250747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 
00:33:43.364 [2024-07-26 02:09:25.250877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.250915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.251083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.251111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.251235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.251261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.251378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.251405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.251515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.251540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 
00:33:43.364 [2024-07-26 02:09:25.251645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.251670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:43.364 [2024-07-26 02:09:25.251808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:33:43.364 [2024-07-26 02:09:25.251835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.251945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:43.364 [2024-07-26 02:09:25.251973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.252093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.252121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 
00:33:43.364 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:43.364 [2024-07-26 02:09:25.252227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.252253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.252364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.252391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.252497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.252523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.252621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.252648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 
00:33:43.364 [2024-07-26 02:09:25.252758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.252784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.252896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.252923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.253088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.253115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.253226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.253252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.253349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.253375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 
00:33:43.364 [2024-07-26 02:09:25.253482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.253508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.253649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.364 [2024-07-26 02:09:25.253676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.364 qpair failed and we were unable to recover it. 00:33:43.364 [2024-07-26 02:09:25.253791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.365 [2024-07-26 02:09:25.253817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.365 qpair failed and we were unable to recover it. 00:33:43.365 [2024-07-26 02:09:25.253932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.365 [2024-07-26 02:09:25.253959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.365 qpair failed and we were unable to recover it. 00:33:43.365 [2024-07-26 02:09:25.254072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.365 [2024-07-26 02:09:25.254099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.365 qpair failed and we were unable to recover it. 
00:33:43.365 [2024-07-26 02:09:25.254221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.254250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.254373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.254400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.254511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.254538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.254639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.254676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.254783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.254810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.254917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.254944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.255105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.255132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.255235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.255261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.255384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.255410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.255520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.255548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.255658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.255685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.255827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.255854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.255997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.256036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.256182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.256221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.256388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.256425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.256537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.256564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.256661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.256687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.256826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.256860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.256975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.257001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.257128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.257155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.257274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.257301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.257416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.257443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.257560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.257586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.257723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.257749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.257870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.257896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.258016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.258042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.258167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.258194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.258334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.258362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.258496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.258522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.258625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.258651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.258778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.258817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.258955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.258983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.259108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.365 [2024-07-26 02:09:25.259137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.365 qpair failed and we were unable to recover it.
00:33:43.365 [2024-07-26 02:09:25.259251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.259278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.259426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.259453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.259566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.259593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.259721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.259748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.259902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.259929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.260041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.260092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.260202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.260229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.260341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.260368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.260477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.260504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.260685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.260711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.260852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.260882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.260990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.261017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.261156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.261183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.261290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.261316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.261427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.261453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.261560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.261586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.261717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.261756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.261872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.261899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.262055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.262101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.262219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.262249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.262356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.262390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.262504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.262532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.262638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.262674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.262806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.262833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.262978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.263016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.263150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.263177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.263294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.263333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.263498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.263526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.263652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.263681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.263812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.263839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.263943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.263970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.366 [2024-07-26 02:09:25.264122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.366 [2024-07-26 02:09:25.264149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.366 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.264269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.264294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.264402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.264429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.264565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.264592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.264700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.264725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.264836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.264862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.265013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.265052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.265178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.265207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.265310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.265336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.265477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.265504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.265625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.265653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.265824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.265869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.265982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.266010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.266139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.266179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.266290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.266317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.266462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.266489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.266594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.266621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.266770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.266808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.266947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.266983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.267118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.267151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.267263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.267291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.267416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.267443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.267578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.267605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.267733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.267759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.267866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.267893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.268020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.268065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.268192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.268220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.268328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.268363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.268502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.268529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.268675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.268702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.268840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:43.367 [2024-07-26 02:09:25.268866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420
00:33:43.367 qpair failed and we were unable to recover it.
00:33:43.367 [2024-07-26 02:09:25.268977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.367 [2024-07-26 02:09:25.269003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.367 qpair failed and we were unable to recover it. 00:33:43.367 [2024-07-26 02:09:25.269121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.367 [2024-07-26 02:09:25.269149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.367 qpair failed and we were unable to recover it. 00:33:43.367 [2024-07-26 02:09:25.269301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.367 [2024-07-26 02:09:25.269330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.367 qpair failed and we were unable to recover it. 00:33:43.367 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:43.367 [2024-07-26 02:09:25.269443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.367 [2024-07-26 02:09:25.269470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.367 qpair failed and we were unable to recover it. 
00:33:43.367 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:43.367 [2024-07-26 02:09:25.269578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.367 [2024-07-26 02:09:25.269604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.367 qpair failed and we were unable to recover it. 00:33:43.367 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.368 [2024-07-26 02:09:25.269712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.368 [2024-07-26 02:09:25.269738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.269891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.269916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.270017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.270042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 
00:33:43.368 [2024-07-26 02:09:25.270161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.270187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.270295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.270321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.270428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.270453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.270585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.270610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.270713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.270738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 
00:33:43.368 [2024-07-26 02:09:25.270834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.270864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.270985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.271025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.271141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.271168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.271284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.271310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.271428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.271454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 
00:33:43.368 [2024-07-26 02:09:25.271596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.271623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.271730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.271758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.271865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.271892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.272010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.272036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.272183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.272209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 
00:33:43.368 [2024-07-26 02:09:25.272311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.272337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.272436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.272461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.272570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.272595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.272707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.272732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.272860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.272899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 
00:33:43.368 [2024-07-26 02:09:25.273030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.273078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.273200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.273227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.273362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.273389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.273494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.273520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.273625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.273651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 
00:33:43.368 [2024-07-26 02:09:25.273762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.273789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.273912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.273951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.274082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.274120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.274239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.274267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.274372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.274397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 
00:33:43.368 [2024-07-26 02:09:25.274514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.274540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.274646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.274672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.274780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.368 [2024-07-26 02:09:25.274810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.368 qpair failed and we were unable to recover it. 00:33:43.368 [2024-07-26 02:09:25.274924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.274964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.275094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.275123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 
00:33:43.369 [2024-07-26 02:09:25.275235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.275261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.275370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.275398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.275538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.275564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.275674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.275700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.275807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.275834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 
00:33:43.369 [2024-07-26 02:09:25.275955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.275994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.276129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.276158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.276272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.276300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.276444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.276470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.276604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.276630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 
00:33:43.369 [2024-07-26 02:09:25.276739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.276767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.276914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.276941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.277068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.277107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.277216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.277243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.277401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.277427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 
00:33:43.369 [2024-07-26 02:09:25.277569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.277595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.277704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.277731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.277836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.277863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.277983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.278021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.278170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.278197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 
00:33:43.369 [2024-07-26 02:09:25.278307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.278333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.278494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.278520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.278630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.278656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.278795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.278821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.278939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.278979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 
00:33:43.369 [2024-07-26 02:09:25.279151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.279190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.279304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.279332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.279501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.279527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.279661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.279687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.279795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.279822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 
00:33:43.369 [2024-07-26 02:09:25.279920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.279946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.280102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.280142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.369 qpair failed and we were unable to recover it. 00:33:43.369 [2024-07-26 02:09:25.280262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.369 [2024-07-26 02:09:25.280289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.280408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.280435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.280548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.280574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 
00:33:43.370 [2024-07-26 02:09:25.280708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.280734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.280848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.280875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.280979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.281011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.281136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.281163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.281277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.281304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 
00:33:43.370 [2024-07-26 02:09:25.281431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.281457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.281617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.281644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.281784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.281810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.281928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.281956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.282065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.282092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 
00:33:43.370 [2024-07-26 02:09:25.282195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.282221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.282317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.282343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.282446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.282472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.282580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.282606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.282717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.282742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 
00:33:43.370 [2024-07-26 02:09:25.282881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.282906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.283018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.283044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.283161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.283189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.283302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.283329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.283445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.283472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 
00:33:43.370 [2024-07-26 02:09:25.283581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.283608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.283752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.283791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.283935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.283963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.284089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.284116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.284227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.284254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 
00:33:43.370 [2024-07-26 02:09:25.284367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.284394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.284571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.284598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.284710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.370 [2024-07-26 02:09:25.284745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.370 qpair failed and we were unable to recover it. 00:33:43.370 [2024-07-26 02:09:25.284863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.284889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.285008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.285054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 
00:33:43.371 [2024-07-26 02:09:25.285186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.285214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.285330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.285367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.285479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.285505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.285614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.285642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.285749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.285775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 
00:33:43.371 [2024-07-26 02:09:25.285909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.285936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.286057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.286091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.286204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.286231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.286355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.286382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.286521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.286547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 
00:33:43.371 [2024-07-26 02:09:25.286679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.286730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.286922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.286949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.287080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.287106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.287252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.287278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.287391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.287419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 
00:33:43.371 [2024-07-26 02:09:25.287524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.287551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.287668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.287695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.287837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.287863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.287997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.288023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.288134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.288161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 
00:33:43.371 [2024-07-26 02:09:25.288277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.288303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.288415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.288441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.371 [2024-07-26 02:09:25.288544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.371 [2024-07-26 02:09:25.288571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.371 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.288706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.288733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.288857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.288896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 
00:33:43.638 [2024-07-26 02:09:25.289007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.289035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.289177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.289203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.289318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.289344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.289482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.289508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.289610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.289636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 
00:33:43.638 [2024-07-26 02:09:25.289753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.289780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.289954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.289993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.290127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.290155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.290277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.290303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.290475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.290501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 
00:33:43.638 [2024-07-26 02:09:25.290601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.290628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.290747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.290774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.290914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.290941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.291067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.291097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.638 Malloc0 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.291218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.291250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 
00:33:43.638 [2024-07-26 02:09:25.291381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.291408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.638 [2024-07-26 02:09:25.291525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.291552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:43.638 [2024-07-26 02:09:25.291653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.291680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.291786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.291813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.638 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.638 qpair failed and we were unable to recover it. 
00:33:43.638 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.638 [2024-07-26 02:09:25.291927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.291954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.292074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.292103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.292222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.292261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.292379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.292406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.292516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.292542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 
00:33:43.638 [2024-07-26 02:09:25.292649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.292675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.638 [2024-07-26 02:09:25.292825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.638 [2024-07-26 02:09:25.292851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.638 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.292976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.293015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.293154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.293181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.293326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.293363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 
00:33:43.639 [2024-07-26 02:09:25.293477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.293503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.293621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.293646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.293753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.293781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.293916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.293943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.294056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.294086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 
00:33:43.639 [2024-07-26 02:09:25.294195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.294221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.294336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.294363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.294483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.294510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.294639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.294665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.294779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.294807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 
00:33:43.639 [2024-07-26 02:09:25.294815] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:43.639 [2024-07-26 02:09:25.294925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.294951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.295067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.295096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.295230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.295256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.295415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.295456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 00:33:43.639 [2024-07-26 02:09:25.295572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.295600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it. 
00:33:43.639 [2024-07-26 02:09:25.295722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.639 [2024-07-26 02:09:25.295749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.639 qpair failed and we were unable to recover it.
[... the same three-line connect() failed, errno = 111 / sock connection error / qpair failure sequence repeated through 02:09:25.302686, cycling over tqpair values 0x7fd150000b90, 0x7fd148000b90, 0x7fd158000b90, and 0x1545f40, always addr=10.0.0.2, port=4420 ...]
[... further connect() failed, errno = 111 / qpair failure messages for tqpair=0x7fd150000b90 and 0x1545f40 (02:09:25.302793-02:09:25.303979), interleaved with the test trace below ...]
00:33:43.640 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:43.641 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:43.641 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:43.641 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... repeated connect() failed, errno = 111 / qpair failure messages (02:09:25.304105-02:09:25.310799), alternating among tqpair=0x7fd150000b90, 0x7fd158000b90, 0x7fd148000b90, and 0x1545f40 ...]
[... further connect() failed, errno = 111 / qpair failure messages for tqpair=0x7fd158000b90, 0x7fd148000b90, and 0x1545f40 (02:09:25.310909-02:09:25.312192), interleaved with the test trace below ...]
00:33:43.642 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:43.642 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:43.642 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:43.642 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:43.642 [2024-07-26 02:09:25.312297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.642 [2024-07-26 02:09:25.312323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.642 qpair failed and we were unable to recover it. 00:33:43.642 [2024-07-26 02:09:25.312449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.642 [2024-07-26 02:09:25.312474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.642 qpair failed and we were unable to recover it. 00:33:43.642 [2024-07-26 02:09:25.312584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.642 [2024-07-26 02:09:25.312610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.312744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.312770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.312888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.312915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 
00:33:43.643 [2024-07-26 02:09:25.313017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.313044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.313171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.313199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.313319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.313361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.313476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.313503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.313641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.313668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 
00:33:43.643 [2024-07-26 02:09:25.313780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.313806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.313908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.313934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.314038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.314072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.314192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.314218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.314325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.314352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 
00:33:43.643 [2024-07-26 02:09:25.314484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.314511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.314619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.314648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.314780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.314811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.314951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.314978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.315128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.315155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 
00:33:43.643 [2024-07-26 02:09:25.315276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.315302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.315422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.315448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.315560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.315587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.315713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.315739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.315880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.315905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 
00:33:43.643 [2024-07-26 02:09:25.316009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.316034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.316160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.316199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.316314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.316342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.316458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.316485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 00:33:43.643 [2024-07-26 02:09:25.316640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.643 [2024-07-26 02:09:25.316667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.643 qpair failed and we were unable to recover it. 
00:33:43.644 [2024-07-26 02:09:25.316812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.316838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.316979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.317007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.317126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.317153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.317304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.317331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.317479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.317507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 
00:33:43.644 [2024-07-26 02:09:25.317648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.317675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.317813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.317840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.317956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.317982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.318117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.318146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.318294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.318333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 
00:33:43.644 [2024-07-26 02:09:25.318444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.318472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.318607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.318634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.318748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.318775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.318886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.318913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.319029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.319057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 
00:33:43.644 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.644 [2024-07-26 02:09:25.319175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.319202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.644 [2024-07-26 02:09:25.319315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.319343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.644 [2024-07-26 02:09:25.319452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.319478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 
00:33:43.644 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.644 [2024-07-26 02:09:25.319588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.319614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.319761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.319799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.319917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.319949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.320088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.320115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.320231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.320258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 
00:33:43.644 [2024-07-26 02:09:25.320420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.320446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.320558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.320586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.320707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.320738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.320876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.320904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.321015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.321041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1545f40 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 
00:33:43.644 [2024-07-26 02:09:25.321164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.321192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.321304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.321331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.644 [2024-07-26 02:09:25.321442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.644 [2024-07-26 02:09:25.321470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.644 qpair failed and we were unable to recover it. 00:33:43.645 [2024-07-26 02:09:25.321582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.645 [2024-07-26 02:09:25.321609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.645 qpair failed and we were unable to recover it. 00:33:43.645 [2024-07-26 02:09:25.321715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.645 [2024-07-26 02:09:25.321741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.645 qpair failed and we were unable to recover it. 
00:33:43.645 [2024-07-26 02:09:25.321884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.645 [2024-07-26 02:09:25.321910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.645 qpair failed and we were unable to recover it. 00:33:43.645 [2024-07-26 02:09:25.322043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.645 [2024-07-26 02:09:25.322074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.645 qpair failed and we were unable to recover it. 00:33:43.645 [2024-07-26 02:09:25.322187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.645 [2024-07-26 02:09:25.322214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd150000b90 with addr=10.0.0.2, port=4420 00:33:43.645 qpair failed and we were unable to recover it. 00:33:43.645 [2024-07-26 02:09:25.322366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.645 [2024-07-26 02:09:25.322405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.645 qpair failed and we were unable to recover it. 00:33:43.645 [2024-07-26 02:09:25.322551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.645 [2024-07-26 02:09:25.322578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd158000b90 with addr=10.0.0.2, port=4420 00:33:43.645 qpair failed and we were unable to recover it. 
00:33:43.645 [2024-07-26 02:09:25.322700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.645 [2024-07-26 02:09:25.322728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.645 qpair failed and we were unable to recover it. 00:33:43.645 [2024-07-26 02:09:25.322877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.645 [2024-07-26 02:09:25.322904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd148000b90 with addr=10.0.0.2, port=4420 00:33:43.645 qpair failed and we were unable to recover it. 00:33:43.645 [2024-07-26 02:09:25.323094] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.645 [2024-07-26 02:09:25.325499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.645 [2024-07-26 02:09:25.325675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.645 [2024-07-26 02:09:25.325704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.645 [2024-07-26 02:09:25.325720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.645 [2024-07-26 02:09:25.325734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.645 [2024-07-26 02:09:25.325778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.645 qpair failed and we were unable to recover it. 
00:33:43.645 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.645 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:43.645 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.645 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:43.645 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.645 02:09:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2421720 00:33:43.645 [2024-07-26 02:09:25.335448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.645 [2024-07-26 02:09:25.335564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.645 [2024-07-26 02:09:25.335592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.645 [2024-07-26 02:09:25.335607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.645 [2024-07-26 02:09:25.335620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.645 [2024-07-26 02:09:25.335664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.645 qpair failed and we were unable to recover it. 
00:33:43.645 [2024-07-26 02:09:25.345424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.645 [2024-07-26 02:09:25.345534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.645 [2024-07-26 02:09:25.345561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.645 [2024-07-26 02:09:25.345576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.645 [2024-07-26 02:09:25.345590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.645 [2024-07-26 02:09:25.345621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.645 qpair failed and we were unable to recover it. 
00:33:43.645 [2024-07-26 02:09:25.355436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.645 [2024-07-26 02:09:25.355553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.645 [2024-07-26 02:09:25.355580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.645 [2024-07-26 02:09:25.355596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.645 [2024-07-26 02:09:25.355609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.645 [2024-07-26 02:09:25.355640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.645 qpair failed and we were unable to recover it. 
00:33:43.645 [2024-07-26 02:09:25.365401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.645 [2024-07-26 02:09:25.365522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.645 [2024-07-26 02:09:25.365548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.645 [2024-07-26 02:09:25.365563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.645 [2024-07-26 02:09:25.365576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.645 [2024-07-26 02:09:25.365606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.645 qpair failed and we were unable to recover it. 
00:33:43.645 [2024-07-26 02:09:25.375432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.645 [2024-07-26 02:09:25.375550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.645 [2024-07-26 02:09:25.375577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.645 [2024-07-26 02:09:25.375593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.645 [2024-07-26 02:09:25.375606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.645 [2024-07-26 02:09:25.375637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.645 qpair failed and we were unable to recover it. 
00:33:43.645 [2024-07-26 02:09:25.385451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.645 [2024-07-26 02:09:25.385561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.645 [2024-07-26 02:09:25.385588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.645 [2024-07-26 02:09:25.385603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.645 [2024-07-26 02:09:25.385617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.645 [2024-07-26 02:09:25.385647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.645 qpair failed and we were unable to recover it. 
00:33:43.645 [2024-07-26 02:09:25.395473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.645 [2024-07-26 02:09:25.395588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.645 [2024-07-26 02:09:25.395614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.645 [2024-07-26 02:09:25.395646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.645 [2024-07-26 02:09:25.395660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.395691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.405461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.405581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.405607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.405622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.405635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.405666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.415490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.415601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.415627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.415642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.415655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.415686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.425549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.425667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.425693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.425708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.425722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.425754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.435580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.435699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.435725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.435740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.435753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.435784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.445632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.445785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.445811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.445826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.445839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.445883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.455648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.455753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.455779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.455794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.455807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.455840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.465662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.465766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.465792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.465808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.465821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.465853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.475685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.475798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.475824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.475839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.475853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.475896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.485703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.485816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.485848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.485864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.485877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.485908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.495793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.495914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.495941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.495956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.495969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.496001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.505769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.505874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.505901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.505915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.505928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.505960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.646 qpair failed and we were unable to recover it. 
00:33:43.646 [2024-07-26 02:09:25.515783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.646 [2024-07-26 02:09:25.515891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.646 [2024-07-26 02:09:25.515919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.646 [2024-07-26 02:09:25.515934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.646 [2024-07-26 02:09:25.515947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.646 [2024-07-26 02:09:25.515992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.525838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.525952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.525979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.525994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.526007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.647 [2024-07-26 02:09:25.526044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.535865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.535979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.536004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.536019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.536035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.647 [2024-07-26 02:09:25.536073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.545898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.546002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.546028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.546043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.546057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.647 [2024-07-26 02:09:25.546097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.555896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.556008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.556034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.556050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.556076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.647 [2024-07-26 02:09:25.556110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.565958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.566084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.566110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.566126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.566139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.647 [2024-07-26 02:09:25.566169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.575997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.576115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.576146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.576162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.576175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.647 [2024-07-26 02:09:25.576206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.586015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.586127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.586153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.586168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.586181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.647 [2024-07-26 02:09:25.586215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.596128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.596242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.596268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.596283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.596296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.647 [2024-07-26 02:09:25.596328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.606114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.606236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.606262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.606278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.606291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.647 [2024-07-26 02:09:25.606333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.616126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.616231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.616257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.616272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.616294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.647 [2024-07-26 02:09:25.616325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.626157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.626268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.626294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.626309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.626323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.647 [2024-07-26 02:09:25.626356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.647 qpair failed and we were unable to recover it. 
00:33:43.647 [2024-07-26 02:09:25.636170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.647 [2024-07-26 02:09:25.636286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.647 [2024-07-26 02:09:25.636313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.647 [2024-07-26 02:09:25.636328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.647 [2024-07-26 02:09:25.636341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.648 [2024-07-26 02:09:25.636371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.648 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.646199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.646336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.910 [2024-07-26 02:09:25.646363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.910 [2024-07-26 02:09:25.646379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.910 [2024-07-26 02:09:25.646392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.910 [2024-07-26 02:09:25.646422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.910 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.656238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.656349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.910 [2024-07-26 02:09:25.656376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.910 [2024-07-26 02:09:25.656391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.910 [2024-07-26 02:09:25.656407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.910 [2024-07-26 02:09:25.656437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.910 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.666231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.666340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.910 [2024-07-26 02:09:25.666366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.910 [2024-07-26 02:09:25.666381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.910 [2024-07-26 02:09:25.666395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.910 [2024-07-26 02:09:25.666426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.910 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.676327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.676492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.910 [2024-07-26 02:09:25.676521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.910 [2024-07-26 02:09:25.676536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.910 [2024-07-26 02:09:25.676551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.910 [2024-07-26 02:09:25.676596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.910 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.686305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.686417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.910 [2024-07-26 02:09:25.686443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.910 [2024-07-26 02:09:25.686459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.910 [2024-07-26 02:09:25.686471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.910 [2024-07-26 02:09:25.686502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.910 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.696308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.696413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.910 [2024-07-26 02:09:25.696440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.910 [2024-07-26 02:09:25.696455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.910 [2024-07-26 02:09:25.696468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.910 [2024-07-26 02:09:25.696500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.910 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.706371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.706547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.910 [2024-07-26 02:09:25.706574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.910 [2024-07-26 02:09:25.706606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.910 [2024-07-26 02:09:25.706626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.910 [2024-07-26 02:09:25.706684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.910 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.716393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.716508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.910 [2024-07-26 02:09:25.716534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.910 [2024-07-26 02:09:25.716550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.910 [2024-07-26 02:09:25.716563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.910 [2024-07-26 02:09:25.716594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.910 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.726441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.726573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.910 [2024-07-26 02:09:25.726600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.910 [2024-07-26 02:09:25.726615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.910 [2024-07-26 02:09:25.726628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.910 [2024-07-26 02:09:25.726659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.910 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.736578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.736732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.910 [2024-07-26 02:09:25.736758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.910 [2024-07-26 02:09:25.736773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.910 [2024-07-26 02:09:25.736786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.910 [2024-07-26 02:09:25.736834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.910 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.746464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.746576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.910 [2024-07-26 02:09:25.746603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.910 [2024-07-26 02:09:25.746618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.910 [2024-07-26 02:09:25.746631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:43.910 [2024-07-26 02:09:25.746664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:43.910 qpair failed and we were unable to recover it. 
00:33:43.910 [2024-07-26 02:09:25.756521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.910 [2024-07-26 02:09:25.756633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.911 [2024-07-26 02:09:25.756666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.911 [2024-07-26 02:09:25.756682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.911 [2024-07-26 02:09:25.756696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.911 [2024-07-26 02:09:25.756741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.911 qpair failed and we were unable to recover it. 
00:33:43.911 [2024-07-26 02:09:25.766583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.911 [2024-07-26 02:09:25.766726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.911 [2024-07-26 02:09:25.766755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.911 [2024-07-26 02:09:25.766771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.911 [2024-07-26 02:09:25.766784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.911 [2024-07-26 02:09:25.766829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.911 qpair failed and we were unable to recover it. 
00:33:43.911 [2024-07-26 02:09:25.776588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.911 [2024-07-26 02:09:25.776698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.911 [2024-07-26 02:09:25.776725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.911 [2024-07-26 02:09:25.776741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.911 [2024-07-26 02:09:25.776754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.911 [2024-07-26 02:09:25.776784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.911 qpair failed and we were unable to recover it. 
00:33:43.911 [2024-07-26 02:09:25.786643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.911 [2024-07-26 02:09:25.786755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.911 [2024-07-26 02:09:25.786782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.911 [2024-07-26 02:09:25.786797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.911 [2024-07-26 02:09:25.786811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.911 [2024-07-26 02:09:25.786840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.911 qpair failed and we were unable to recover it. 
00:33:43.911 [2024-07-26 02:09:25.796691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.911 [2024-07-26 02:09:25.796804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.911 [2024-07-26 02:09:25.796831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.911 [2024-07-26 02:09:25.796855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.911 [2024-07-26 02:09:25.796869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.911 [2024-07-26 02:09:25.796900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.911 qpair failed and we were unable to recover it. 
00:33:43.911 [2024-07-26 02:09:25.806671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.911 [2024-07-26 02:09:25.806793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.911 [2024-07-26 02:09:25.806821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.911 [2024-07-26 02:09:25.806836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.911 [2024-07-26 02:09:25.806849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.911 [2024-07-26 02:09:25.806879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.911 qpair failed and we were unable to recover it. 
00:33:43.911 [2024-07-26 02:09:25.816689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.911 [2024-07-26 02:09:25.816797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.911 [2024-07-26 02:09:25.816825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.911 [2024-07-26 02:09:25.816840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.911 [2024-07-26 02:09:25.816853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.911 [2024-07-26 02:09:25.816884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.911 qpair failed and we were unable to recover it. 
00:33:43.911 [2024-07-26 02:09:25.826739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.911 [2024-07-26 02:09:25.826849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.911 [2024-07-26 02:09:25.826877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.911 [2024-07-26 02:09:25.826892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.911 [2024-07-26 02:09:25.826905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.911 [2024-07-26 02:09:25.826935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.911 qpair failed and we were unable to recover it. 
00:33:43.911 [2024-07-26 02:09:25.836768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.911 [2024-07-26 02:09:25.836883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.911 [2024-07-26 02:09:25.836910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.911 [2024-07-26 02:09:25.836926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.911 [2024-07-26 02:09:25.836939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.911 [2024-07-26 02:09:25.836969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.911 qpair failed and we were unable to recover it. 
00:33:43.911 [2024-07-26 02:09:25.846772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.911 [2024-07-26 02:09:25.846885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.911 [2024-07-26 02:09:25.846912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.911 [2024-07-26 02:09:25.846927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.911 [2024-07-26 02:09:25.846940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.911 [2024-07-26 02:09:25.846969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.911 qpair failed and we were unable to recover it. 
00:33:43.911 [2024-07-26 02:09:25.856782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.911 [2024-07-26 02:09:25.856887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.911 [2024-07-26 02:09:25.856915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.912 [2024-07-26 02:09:25.856930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.912 [2024-07-26 02:09:25.856943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.912 [2024-07-26 02:09:25.856974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.912 qpair failed and we were unable to recover it. 
00:33:43.912 [2024-07-26 02:09:25.866853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.912 [2024-07-26 02:09:25.866974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.912 [2024-07-26 02:09:25.867001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.912 [2024-07-26 02:09:25.867017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.912 [2024-07-26 02:09:25.867030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.912 [2024-07-26 02:09:25.867068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.912 qpair failed and we were unable to recover it. 
00:33:43.912 [2024-07-26 02:09:25.876855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.912 [2024-07-26 02:09:25.877016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.912 [2024-07-26 02:09:25.877042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.912 [2024-07-26 02:09:25.877057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.912 [2024-07-26 02:09:25.877080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.912 [2024-07-26 02:09:25.877110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.912 qpair failed and we were unable to recover it. 
00:33:43.912 [2024-07-26 02:09:25.886868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.912 [2024-07-26 02:09:25.886979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.912 [2024-07-26 02:09:25.887010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.912 [2024-07-26 02:09:25.887026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.912 [2024-07-26 02:09:25.887039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.912 [2024-07-26 02:09:25.887076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.912 qpair failed and we were unable to recover it. 
00:33:43.912 [2024-07-26 02:09:25.896999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.912 [2024-07-26 02:09:25.897129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.912 [2024-07-26 02:09:25.897156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.912 [2024-07-26 02:09:25.897171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.912 [2024-07-26 02:09:25.897184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.912 [2024-07-26 02:09:25.897214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.912 qpair failed and we were unable to recover it. 
00:33:43.912 [2024-07-26 02:09:25.906965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.912 [2024-07-26 02:09:25.907096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.912 [2024-07-26 02:09:25.907123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.912 [2024-07-26 02:09:25.907138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.912 [2024-07-26 02:09:25.907151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.912 [2024-07-26 02:09:25.907184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.912 qpair failed and we were unable to recover it. 
00:33:43.912 [2024-07-26 02:09:25.916959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.912 [2024-07-26 02:09:25.917081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.912 [2024-07-26 02:09:25.917106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.912 [2024-07-26 02:09:25.917121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.912 [2024-07-26 02:09:25.917134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:43.912 [2024-07-26 02:09:25.917164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:43.912 qpair failed and we were unable to recover it. 
00:33:44.172 [2024-07-26 02:09:25.926968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.172 [2024-07-26 02:09:25.927081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.172 [2024-07-26 02:09:25.927107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.172 [2024-07-26 02:09:25.927122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.172 [2024-07-26 02:09:25.927135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.172 [2024-07-26 02:09:25.927173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.172 qpair failed and we were unable to recover it. 
00:33:44.172 [2024-07-26 02:09:25.937009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.172 [2024-07-26 02:09:25.937129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.172 [2024-07-26 02:09:25.937156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.172 [2024-07-26 02:09:25.937171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.172 [2024-07-26 02:09:25.937185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.172 [2024-07-26 02:09:25.937216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.172 qpair failed and we were unable to recover it. 
00:33:44.172 [2024-07-26 02:09:25.947031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.172 [2024-07-26 02:09:25.947143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.172 [2024-07-26 02:09:25.947171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.172 [2024-07-26 02:09:25.947188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.172 [2024-07-26 02:09:25.947202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.172 [2024-07-26 02:09:25.947247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.172 qpair failed and we were unable to recover it. 
00:33:44.172 [2024-07-26 02:09:25.957084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.172 [2024-07-26 02:09:25.957219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.172 [2024-07-26 02:09:25.957247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.172 [2024-07-26 02:09:25.957265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.172 [2024-07-26 02:09:25.957279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.172 [2024-07-26 02:09:25.957312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.172 qpair failed and we were unable to recover it. 
00:33:44.172 [2024-07-26 02:09:25.967091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.172 [2024-07-26 02:09:25.967210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.173 [2024-07-26 02:09:25.967236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.173 [2024-07-26 02:09:25.967252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.173 [2024-07-26 02:09:25.967265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.173 [2024-07-26 02:09:25.967294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.173 qpair failed and we were unable to recover it. 
00:33:44.173 [2024-07-26 02:09:25.977128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.173 [2024-07-26 02:09:25.977235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.173 [2024-07-26 02:09:25.977267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.173 [2024-07-26 02:09:25.977284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.173 [2024-07-26 02:09:25.977298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.173 [2024-07-26 02:09:25.977330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.173 qpair failed and we were unable to recover it. 
00:33:44.173 [2024-07-26 02:09:25.987155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.173 [2024-07-26 02:09:25.987265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.173 [2024-07-26 02:09:25.987293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.173 [2024-07-26 02:09:25.987309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.173 [2024-07-26 02:09:25.987331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.173 [2024-07-26 02:09:25.987363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.173 qpair failed and we were unable to recover it. 
00:33:44.173 [2024-07-26 02:09:25.997196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.173 [2024-07-26 02:09:25.997312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.173 [2024-07-26 02:09:25.997337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.173 [2024-07-26 02:09:25.997353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.173 [2024-07-26 02:09:25.997367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.173 [2024-07-26 02:09:25.997398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.173 qpair failed and we were unable to recover it. 
00:33:44.436 [2024-07-26 02:09:26.358242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.436 [2024-07-26 02:09:26.358357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.436 [2024-07-26 02:09:26.358386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.436 [2024-07-26 02:09:26.358402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.436 [2024-07-26 02:09:26.358417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.436 [2024-07-26 02:09:26.358448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.436 qpair failed and we were unable to recover it. 
00:33:44.436 [2024-07-26 02:09:26.368260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.436 [2024-07-26 02:09:26.368366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.436 [2024-07-26 02:09:26.368394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.436 [2024-07-26 02:09:26.368410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.436 [2024-07-26 02:09:26.368425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.436 [2024-07-26 02:09:26.368467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.436 qpair failed and we were unable to recover it. 
00:33:44.436 [2024-07-26 02:09:26.378316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.436 [2024-07-26 02:09:26.378449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.436 [2024-07-26 02:09:26.378483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.436 [2024-07-26 02:09:26.378502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.436 [2024-07-26 02:09:26.378519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.436 [2024-07-26 02:09:26.378566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.436 qpair failed and we were unable to recover it. 
00:33:44.436 [2024-07-26 02:09:26.388296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.436 [2024-07-26 02:09:26.388424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.436 [2024-07-26 02:09:26.388452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.436 [2024-07-26 02:09:26.388468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.436 [2024-07-26 02:09:26.388484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.436 [2024-07-26 02:09:26.388515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.436 qpair failed and we were unable to recover it. 
00:33:44.436 [2024-07-26 02:09:26.398336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.436 [2024-07-26 02:09:26.398448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.436 [2024-07-26 02:09:26.398476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.436 [2024-07-26 02:09:26.398491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.436 [2024-07-26 02:09:26.398507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.436 [2024-07-26 02:09:26.398537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.436 qpair failed and we were unable to recover it. 
00:33:44.436 [2024-07-26 02:09:26.408365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.436 [2024-07-26 02:09:26.408488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.436 [2024-07-26 02:09:26.408515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.436 [2024-07-26 02:09:26.408531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.436 [2024-07-26 02:09:26.408546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.436 [2024-07-26 02:09:26.408577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.436 qpair failed and we were unable to recover it. 
00:33:44.436 [2024-07-26 02:09:26.418372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.436 [2024-07-26 02:09:26.418475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.436 [2024-07-26 02:09:26.418502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.436 [2024-07-26 02:09:26.418518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.436 [2024-07-26 02:09:26.418533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.436 [2024-07-26 02:09:26.418570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.436 qpair failed and we were unable to recover it. 
00:33:44.436 [2024-07-26 02:09:26.428404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.436 [2024-07-26 02:09:26.428508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.436 [2024-07-26 02:09:26.428535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.436 [2024-07-26 02:09:26.428551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.436 [2024-07-26 02:09:26.428566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.436 [2024-07-26 02:09:26.428597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.436 qpair failed and we were unable to recover it. 
00:33:44.436 [2024-07-26 02:09:26.438427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.436 [2024-07-26 02:09:26.438535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.436 [2024-07-26 02:09:26.438562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.436 [2024-07-26 02:09:26.438578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.437 [2024-07-26 02:09:26.438593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.437 [2024-07-26 02:09:26.438623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.437 qpair failed and we were unable to recover it. 
00:33:44.697 [2024-07-26 02:09:26.448461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.697 [2024-07-26 02:09:26.448590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.697 [2024-07-26 02:09:26.448617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.697 [2024-07-26 02:09:26.448633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.697 [2024-07-26 02:09:26.448649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.697 [2024-07-26 02:09:26.448680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.697 qpair failed and we were unable to recover it. 
00:33:44.697 [2024-07-26 02:09:26.458555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.697 [2024-07-26 02:09:26.458680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.697 [2024-07-26 02:09:26.458708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.697 [2024-07-26 02:09:26.458728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.697 [2024-07-26 02:09:26.458746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.697 [2024-07-26 02:09:26.458793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.697 qpair failed and we were unable to recover it. 
00:33:44.697 [2024-07-26 02:09:26.468509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.697 [2024-07-26 02:09:26.468618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.697 [2024-07-26 02:09:26.468651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.697 [2024-07-26 02:09:26.468668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.697 [2024-07-26 02:09:26.468683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.697 [2024-07-26 02:09:26.468715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.697 qpair failed and we were unable to recover it. 
00:33:44.697 [2024-07-26 02:09:26.478572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.697 [2024-07-26 02:09:26.478698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.697 [2024-07-26 02:09:26.478725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.697 [2024-07-26 02:09:26.478741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.697 [2024-07-26 02:09:26.478757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.697 [2024-07-26 02:09:26.478788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.697 qpair failed and we were unable to recover it. 
00:33:44.697 [2024-07-26 02:09:26.488593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.697 [2024-07-26 02:09:26.488730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.697 [2024-07-26 02:09:26.488757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.697 [2024-07-26 02:09:26.488774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.697 [2024-07-26 02:09:26.488789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.488821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.498620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.498772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.498800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.498816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.498831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.498875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.508625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.508728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.508755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.508771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.508791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.508823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.518657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.518769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.518796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.518812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.518826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.518857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.528698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.528809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.528837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.528853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.528868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.528899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.538689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.538798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.538825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.538841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.538856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.538887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.548828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.548944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.548970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.549002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.549018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.549071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.558799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.558921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.558948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.558964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.558979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.559011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.568797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.568922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.568949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.568965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.568980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.569010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.578847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.578954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.578980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.578996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.579010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.579041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.588888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.588993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.589020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.589036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.589051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.589090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.598985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.599136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.599163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.599184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.599199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.698 [2024-07-26 02:09:26.599231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.698 qpair failed and we were unable to recover it. 
00:33:44.698 [2024-07-26 02:09:26.608910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.698 [2024-07-26 02:09:26.609054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.698 [2024-07-26 02:09:26.609088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.698 [2024-07-26 02:09:26.609105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.698 [2024-07-26 02:09:26.609120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.699 [2024-07-26 02:09:26.609151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.699 qpair failed and we were unable to recover it. 
00:33:44.699 [2024-07-26 02:09:26.618950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.699 [2024-07-26 02:09:26.619054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.699 [2024-07-26 02:09:26.619095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.699 [2024-07-26 02:09:26.619114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.699 [2024-07-26 02:09:26.619127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.699 [2024-07-26 02:09:26.619159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.699 qpair failed and we were unable to recover it. 
00:33:44.699 [2024-07-26 02:09:26.628968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.699 [2024-07-26 02:09:26.629080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.699 [2024-07-26 02:09:26.629108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.699 [2024-07-26 02:09:26.629124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.699 [2024-07-26 02:09:26.629138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.699 [2024-07-26 02:09:26.629170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.699 qpair failed and we were unable to recover it. 
00:33:44.699 [2024-07-26 02:09:26.639027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.699 [2024-07-26 02:09:26.639150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.699 [2024-07-26 02:09:26.639177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.699 [2024-07-26 02:09:26.639193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.699 [2024-07-26 02:09:26.639208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.699 [2024-07-26 02:09:26.639240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.699 qpair failed and we were unable to recover it. 
00:33:44.699 [2024-07-26 02:09:26.649046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.699 [2024-07-26 02:09:26.649171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.699 [2024-07-26 02:09:26.649198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.699 [2024-07-26 02:09:26.649215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.699 [2024-07-26 02:09:26.649231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.699 [2024-07-26 02:09:26.649262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.699 qpair failed and we were unable to recover it. 
00:33:44.699 [2024-07-26 02:09:26.659076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.699 [2024-07-26 02:09:26.659196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.699 [2024-07-26 02:09:26.659223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.699 [2024-07-26 02:09:26.659239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.699 [2024-07-26 02:09:26.659255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.699 [2024-07-26 02:09:26.659287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.699 qpair failed and we were unable to recover it. 
00:33:44.699 [2024-07-26 02:09:26.669140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.699 [2024-07-26 02:09:26.669254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.699 [2024-07-26 02:09:26.669282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.699 [2024-07-26 02:09:26.669301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.699 [2024-07-26 02:09:26.669318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.699 [2024-07-26 02:09:26.669350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.699 qpair failed and we were unable to recover it. 
00:33:44.699 [2024-07-26 02:09:26.679145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.699 [2024-07-26 02:09:26.679264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.699 [2024-07-26 02:09:26.679291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.699 [2024-07-26 02:09:26.679308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.699 [2024-07-26 02:09:26.679323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.699 [2024-07-26 02:09:26.679355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.699 qpair failed and we were unable to recover it. 
00:33:44.699 [2024-07-26 02:09:26.689163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.699 [2024-07-26 02:09:26.689278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.699 [2024-07-26 02:09:26.689305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.699 [2024-07-26 02:09:26.689328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.699 [2024-07-26 02:09:26.689344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.699 [2024-07-26 02:09:26.689376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.699 qpair failed and we were unable to recover it. 
00:33:44.699 [2024-07-26 02:09:26.699192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.699 [2024-07-26 02:09:26.699299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.699 [2024-07-26 02:09:26.699326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.699 [2024-07-26 02:09:26.699342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.699 [2024-07-26 02:09:26.699358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.699 [2024-07-26 02:09:26.699389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.699 qpair failed and we were unable to recover it. 
00:33:44.959 [2024-07-26 02:09:26.709212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.959 [2024-07-26 02:09:26.709327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.959 [2024-07-26 02:09:26.709355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.959 [2024-07-26 02:09:26.709375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.959 [2024-07-26 02:09:26.709391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.959 [2024-07-26 02:09:26.709422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.719268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.960 [2024-07-26 02:09:26.719386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.960 [2024-07-26 02:09:26.719413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.960 [2024-07-26 02:09:26.719429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.960 [2024-07-26 02:09:26.719445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.960 [2024-07-26 02:09:26.719476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.729272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.960 [2024-07-26 02:09:26.729387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.960 [2024-07-26 02:09:26.729413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.960 [2024-07-26 02:09:26.729429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.960 [2024-07-26 02:09:26.729444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.960 [2024-07-26 02:09:26.729475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.739336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.960 [2024-07-26 02:09:26.739459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.960 [2024-07-26 02:09:26.739488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.960 [2024-07-26 02:09:26.739504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.960 [2024-07-26 02:09:26.739519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.960 [2024-07-26 02:09:26.739562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.749316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.960 [2024-07-26 02:09:26.749441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.960 [2024-07-26 02:09:26.749469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.960 [2024-07-26 02:09:26.749485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.960 [2024-07-26 02:09:26.749501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.960 [2024-07-26 02:09:26.749532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.759361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.960 [2024-07-26 02:09:26.759471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.960 [2024-07-26 02:09:26.759498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.960 [2024-07-26 02:09:26.759514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.960 [2024-07-26 02:09:26.759530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.960 [2024-07-26 02:09:26.759560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.769380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.960 [2024-07-26 02:09:26.769529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.960 [2024-07-26 02:09:26.769556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.960 [2024-07-26 02:09:26.769572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.960 [2024-07-26 02:09:26.769588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.960 [2024-07-26 02:09:26.769619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.779407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.960 [2024-07-26 02:09:26.779514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.960 [2024-07-26 02:09:26.779546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.960 [2024-07-26 02:09:26.779563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.960 [2024-07-26 02:09:26.779578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.960 [2024-07-26 02:09:26.779609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.789438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.960 [2024-07-26 02:09:26.789543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.960 [2024-07-26 02:09:26.789570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.960 [2024-07-26 02:09:26.789586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.960 [2024-07-26 02:09:26.789601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.960 [2024-07-26 02:09:26.789633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.799484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.960 [2024-07-26 02:09:26.799593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.960 [2024-07-26 02:09:26.799619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.960 [2024-07-26 02:09:26.799635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.960 [2024-07-26 02:09:26.799651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.960 [2024-07-26 02:09:26.799682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.809483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.960 [2024-07-26 02:09:26.809609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.960 [2024-07-26 02:09:26.809636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.960 [2024-07-26 02:09:26.809652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.960 [2024-07-26 02:09:26.809667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.960 [2024-07-26 02:09:26.809698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.819498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.960 [2024-07-26 02:09:26.819605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.960 [2024-07-26 02:09:26.819632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.960 [2024-07-26 02:09:26.819648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.960 [2024-07-26 02:09:26.819663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.960 [2024-07-26 02:09:26.819701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.960 qpair failed and we were unable to recover it. 
00:33:44.960 [2024-07-26 02:09:26.829558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.829674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.829700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.829716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.829731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.829762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.961 [2024-07-26 02:09:26.839666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.839790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.839817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.839833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.839848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.839878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.961 [2024-07-26 02:09:26.849584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.849694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.849720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.849736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.849751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.849782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.961 [2024-07-26 02:09:26.859656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.859780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.859806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.859823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.859838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.859896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.961 [2024-07-26 02:09:26.869648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.869789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.869821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.869838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.869853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.869884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.961 [2024-07-26 02:09:26.879695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.879823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.879850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.879865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.879881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.879912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.961 [2024-07-26 02:09:26.889726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.889851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.889878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.889894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.889909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.889939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.961 [2024-07-26 02:09:26.899724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.899832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.899858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.899874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.899889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.899920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.961 [2024-07-26 02:09:26.909780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.909896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.909923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.909939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.909959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.909990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.961 [2024-07-26 02:09:26.919849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.920009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.920036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.920055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.920078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.920110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.961 [2024-07-26 02:09:26.929810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.929920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.929947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.929963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.929978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.930010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.961 [2024-07-26 02:09:26.939862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.961 [2024-07-26 02:09:26.939980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.961 [2024-07-26 02:09:26.940007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.961 [2024-07-26 02:09:26.940023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.961 [2024-07-26 02:09:26.940037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.961 [2024-07-26 02:09:26.940078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.961 qpair failed and we were unable to recover it. 
00:33:44.962 [2024-07-26 02:09:26.949966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.962 [2024-07-26 02:09:26.950085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.962 [2024-07-26 02:09:26.950112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.962 [2024-07-26 02:09:26.950128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.962 [2024-07-26 02:09:26.950143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:44.962 [2024-07-26 02:09:26.950175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.962 qpair failed and we were unable to recover it. 
00:33:44.962 [2024-07-26 02:09:26.959939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:44.962 [2024-07-26 02:09:26.960070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:44.962 [2024-07-26 02:09:26.960099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:44.962 [2024-07-26 02:09:26.960116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:44.962 [2024-07-26 02:09:26.960131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:44.962 [2024-07-26 02:09:26.960163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:44.962 qpair failed and we were unable to recover it.
00:33:45.222 [2024-07-26 02:09:26.969951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.222 [2024-07-26 02:09:26.970089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.222 [2024-07-26 02:09:26.970115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.222 [2024-07-26 02:09:26.970131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.222 [2024-07-26 02:09:26.970145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.222 [2024-07-26 02:09:26.970176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.222 qpair failed and we were unable to recover it.
00:33:45.222 [2024-07-26 02:09:26.979958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.222 [2024-07-26 02:09:26.980067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.222 [2024-07-26 02:09:26.980095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.222 [2024-07-26 02:09:26.980111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.222 [2024-07-26 02:09:26.980125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.222 [2024-07-26 02:09:26.980157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.222 qpair failed and we were unable to recover it.
00:33:45.222 [2024-07-26 02:09:26.989993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.222 [2024-07-26 02:09:26.990125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.222 [2024-07-26 02:09:26.990155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.222 [2024-07-26 02:09:26.990172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.222 [2024-07-26 02:09:26.990190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.222 [2024-07-26 02:09:26.990223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.222 qpair failed and we were unable to recover it.
00:33:45.222 [2024-07-26 02:09:27.000000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.222 [2024-07-26 02:09:27.000118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.222 [2024-07-26 02:09:27.000146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.222 [2024-07-26 02:09:27.000167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.222 [2024-07-26 02:09:27.000183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.222 [2024-07-26 02:09:27.000214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.222 qpair failed and we were unable to recover it.
00:33:45.222 [2024-07-26 02:09:27.010037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.222 [2024-07-26 02:09:27.010163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.222 [2024-07-26 02:09:27.010191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.222 [2024-07-26 02:09:27.010207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.222 [2024-07-26 02:09:27.010222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.222 [2024-07-26 02:09:27.010253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.222 qpair failed and we were unable to recover it.
00:33:45.222 [2024-07-26 02:09:27.020050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.222 [2024-07-26 02:09:27.020161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.222 [2024-07-26 02:09:27.020187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.222 [2024-07-26 02:09:27.020204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.222 [2024-07-26 02:09:27.020219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.222 [2024-07-26 02:09:27.020250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.222 qpair failed and we were unable to recover it.
00:33:45.222 [2024-07-26 02:09:27.030099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.222 [2024-07-26 02:09:27.030204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.222 [2024-07-26 02:09:27.030230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.222 [2024-07-26 02:09:27.030246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.222 [2024-07-26 02:09:27.030262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.222 [2024-07-26 02:09:27.030293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.222 qpair failed and we were unable to recover it.
00:33:45.222 [2024-07-26 02:09:27.040159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.222 [2024-07-26 02:09:27.040320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.222 [2024-07-26 02:09:27.040347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.222 [2024-07-26 02:09:27.040363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.222 [2024-07-26 02:09:27.040379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.222 [2024-07-26 02:09:27.040410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.222 qpair failed and we were unable to recover it.
00:33:45.222 [2024-07-26 02:09:27.050151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.222 [2024-07-26 02:09:27.050266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.222 [2024-07-26 02:09:27.050293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.222 [2024-07-26 02:09:27.050308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.222 [2024-07-26 02:09:27.050323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.222 [2024-07-26 02:09:27.050354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.222 qpair failed and we were unable to recover it.
00:33:45.222 [2024-07-26 02:09:27.060198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.222 [2024-07-26 02:09:27.060332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.222 [2024-07-26 02:09:27.060359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.222 [2024-07-26 02:09:27.060375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.222 [2024-07-26 02:09:27.060390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.222 [2024-07-26 02:09:27.060421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.222 qpair failed and we were unable to recover it.
00:33:45.222 [2024-07-26 02:09:27.070261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.222 [2024-07-26 02:09:27.070413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.222 [2024-07-26 02:09:27.070440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.070456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.070471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.070502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.080275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.080386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.223 [2024-07-26 02:09:27.080412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.080428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.080443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.080487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.090286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.090399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.223 [2024-07-26 02:09:27.090426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.090448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.090464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.090495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.100301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.100443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.223 [2024-07-26 02:09:27.100471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.100487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.100502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.100545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.110383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.110532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.223 [2024-07-26 02:09:27.110559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.110575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.110588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.110620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.120376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.120491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.223 [2024-07-26 02:09:27.120519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.120536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.120550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.120582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.130435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.130551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.223 [2024-07-26 02:09:27.130578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.130595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.130610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.130658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.140411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.140513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.223 [2024-07-26 02:09:27.140541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.140557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.140572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.140605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.150436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.150536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.223 [2024-07-26 02:09:27.150562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.150578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.150593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.150625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.160479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.160594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.223 [2024-07-26 02:09:27.160620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.160637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.160652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.160683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.170484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.170589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.223 [2024-07-26 02:09:27.170616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.170632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.170647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.170678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.180566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.180677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.223 [2024-07-26 02:09:27.180708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.223 [2024-07-26 02:09:27.180725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.223 [2024-07-26 02:09:27.180740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.223 [2024-07-26 02:09:27.180771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.223 qpair failed and we were unable to recover it.
00:33:45.223 [2024-07-26 02:09:27.190563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.223 [2024-07-26 02:09:27.190666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.224 [2024-07-26 02:09:27.190693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.224 [2024-07-26 02:09:27.190709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.224 [2024-07-26 02:09:27.190725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.224 [2024-07-26 02:09:27.190756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.224 qpair failed and we were unable to recover it.
00:33:45.224 [2024-07-26 02:09:27.200638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.224 [2024-07-26 02:09:27.200766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.224 [2024-07-26 02:09:27.200792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.224 [2024-07-26 02:09:27.200808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.224 [2024-07-26 02:09:27.200824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.224 [2024-07-26 02:09:27.200855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.224 qpair failed and we were unable to recover it.
00:33:45.224 [2024-07-26 02:09:27.210595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.224 [2024-07-26 02:09:27.210706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.224 [2024-07-26 02:09:27.210733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.224 [2024-07-26 02:09:27.210749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.224 [2024-07-26 02:09:27.210765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.224 [2024-07-26 02:09:27.210796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.224 qpair failed and we were unable to recover it.
00:33:45.224 [2024-07-26 02:09:27.220661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.224 [2024-07-26 02:09:27.220763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.224 [2024-07-26 02:09:27.220789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.224 [2024-07-26 02:09:27.220805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.224 [2024-07-26 02:09:27.220820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.224 [2024-07-26 02:09:27.220858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.224 qpair failed and we were unable to recover it.
00:33:45.224 [2024-07-26 02:09:27.230695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.224 [2024-07-26 02:09:27.230812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.224 [2024-07-26 02:09:27.230839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.224 [2024-07-26 02:09:27.230855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.224 [2024-07-26 02:09:27.230870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.224 [2024-07-26 02:09:27.230901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.224 qpair failed and we were unable to recover it.
00:33:45.484 [2024-07-26 02:09:27.240772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.484 [2024-07-26 02:09:27.240889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.484 [2024-07-26 02:09:27.240915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.484 [2024-07-26 02:09:27.240931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.484 [2024-07-26 02:09:27.240946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.484 [2024-07-26 02:09:27.240977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.484 qpair failed and we were unable to recover it.
00:33:45.484 [2024-07-26 02:09:27.250747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.484 [2024-07-26 02:09:27.250859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.484 [2024-07-26 02:09:27.250885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.484 [2024-07-26 02:09:27.250902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.484 [2024-07-26 02:09:27.250916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.484 [2024-07-26 02:09:27.250948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.484 qpair failed and we were unable to recover it.
00:33:45.484 [2024-07-26 02:09:27.260769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.484 [2024-07-26 02:09:27.260875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.484 [2024-07-26 02:09:27.260903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.484 [2024-07-26 02:09:27.260919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.484 [2024-07-26 02:09:27.260934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.484 [2024-07-26 02:09:27.260965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.484 qpair failed and we were unable to recover it.
00:33:45.484 [2024-07-26 02:09:27.270779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.484 [2024-07-26 02:09:27.270890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.484 [2024-07-26 02:09:27.270922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.484 [2024-07-26 02:09:27.270939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.484 [2024-07-26 02:09:27.270954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.484 [2024-07-26 02:09:27.270985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.484 qpair failed and we were unable to recover it.
00:33:45.484 [2024-07-26 02:09:27.280826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.484 [2024-07-26 02:09:27.280940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.484 [2024-07-26 02:09:27.280967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.484 [2024-07-26 02:09:27.280983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.484 [2024-07-26 02:09:27.280998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.484 [2024-07-26 02:09:27.281029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.484 qpair failed and we were unable to recover it.
00:33:45.484 [2024-07-26 02:09:27.290852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.484 [2024-07-26 02:09:27.290963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.484 [2024-07-26 02:09:27.290990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.484 [2024-07-26 02:09:27.291006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.484 [2024-07-26 02:09:27.291021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.484 [2024-07-26 02:09:27.291053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.484 qpair failed and we were unable to recover it.
00:33:45.484 [2024-07-26 02:09:27.300965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.484 [2024-07-26 02:09:27.301129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.484 [2024-07-26 02:09:27.301156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.484 [2024-07-26 02:09:27.301171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.484 [2024-07-26 02:09:27.301185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.484 [2024-07-26 02:09:27.301218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.484 qpair failed and we were unable to recover it.
00:33:45.484 [2024-07-26 02:09:27.310947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.484 [2024-07-26 02:09:27.311053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.484 [2024-07-26 02:09:27.311087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.484 [2024-07-26 02:09:27.311104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.484 [2024-07-26 02:09:27.311124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.484 [2024-07-26 02:09:27.311156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.484 qpair failed and we were unable to recover it.
00:33:45.484 [2024-07-26 02:09:27.320945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.484 [2024-07-26 02:09:27.321069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.484 [2024-07-26 02:09:27.321096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.484 [2024-07-26 02:09:27.321112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.484 [2024-07-26 02:09:27.321126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.484 [2024-07-26 02:09:27.321158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.484 qpair failed and we were unable to recover it. 
00:33:45.484 [2024-07-26 02:09:27.330950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.484 [2024-07-26 02:09:27.331070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.331097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.331114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.331129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.331163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.340986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.341102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.341129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.341153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.341167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.341211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.351005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.351129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.351155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.351172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.351186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.351218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.361093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.361212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.361239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.361254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.361269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.361301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.371117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.371237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.371263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.371280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.371295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.371326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.381186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.381325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.381352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.381372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.381386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.381416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.391138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.391297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.391324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.391340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.391357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.391388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.401171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.401303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.401329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.401349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.401368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.401400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.411188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.411303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.411329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.411347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.411360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.411392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.421292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.421455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.421483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.421498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.421513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.421544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.431303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.431425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.431452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.431469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.431484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.431527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.441286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.441410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.485 [2024-07-26 02:09:27.441436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.485 [2024-07-26 02:09:27.441453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.485 [2024-07-26 02:09:27.441468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.485 [2024-07-26 02:09:27.441499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.485 qpair failed and we were unable to recover it. 
00:33:45.485 [2024-07-26 02:09:27.451321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.485 [2024-07-26 02:09:27.451438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.486 [2024-07-26 02:09:27.451464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.486 [2024-07-26 02:09:27.451480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.486 [2024-07-26 02:09:27.451495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.486 [2024-07-26 02:09:27.451527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.486 qpair failed and we were unable to recover it. 
00:33:45.486 [2024-07-26 02:09:27.461456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.486 [2024-07-26 02:09:27.461563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.486 [2024-07-26 02:09:27.461590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.486 [2024-07-26 02:09:27.461606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.486 [2024-07-26 02:09:27.461621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.486 [2024-07-26 02:09:27.461651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.486 qpair failed and we were unable to recover it. 
00:33:45.486 [2024-07-26 02:09:27.471376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.486 [2024-07-26 02:09:27.471487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.486 [2024-07-26 02:09:27.471514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.486 [2024-07-26 02:09:27.471530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.486 [2024-07-26 02:09:27.471545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.486 [2024-07-26 02:09:27.471588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.486 qpair failed and we were unable to recover it. 
00:33:45.486 [2024-07-26 02:09:27.481420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.486 [2024-07-26 02:09:27.481578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.486 [2024-07-26 02:09:27.481605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.486 [2024-07-26 02:09:27.481621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.486 [2024-07-26 02:09:27.481636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.486 [2024-07-26 02:09:27.481694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.486 qpair failed and we were unable to recover it. 
00:33:45.486 [2024-07-26 02:09:27.491447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.486 [2024-07-26 02:09:27.491586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.486 [2024-07-26 02:09:27.491623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.486 [2024-07-26 02:09:27.491646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.486 [2024-07-26 02:09:27.491661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.486 [2024-07-26 02:09:27.491702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.486 qpair failed and we were unable to recover it. 
00:33:45.749 [2024-07-26 02:09:27.501538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.749 [2024-07-26 02:09:27.501653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.749 [2024-07-26 02:09:27.501679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.749 [2024-07-26 02:09:27.501704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.749 [2024-07-26 02:09:27.501717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.749 [2024-07-26 02:09:27.501749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.749 qpair failed and we were unable to recover it. 
00:33:45.749 [2024-07-26 02:09:27.511547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.749 [2024-07-26 02:09:27.511692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.749 [2024-07-26 02:09:27.511719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.749 [2024-07-26 02:09:27.511735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.749 [2024-07-26 02:09:27.511750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.749 [2024-07-26 02:09:27.511780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.749 qpair failed and we were unable to recover it. 
00:33:45.749 [2024-07-26 02:09:27.521503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.749 [2024-07-26 02:09:27.521670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.749 [2024-07-26 02:09:27.521696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.749 [2024-07-26 02:09:27.521712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.749 [2024-07-26 02:09:27.521728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.749 [2024-07-26 02:09:27.521759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.749 qpair failed and we were unable to recover it. 
00:33:45.749 [2024-07-26 02:09:27.531561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.749 [2024-07-26 02:09:27.531681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.749 [2024-07-26 02:09:27.531707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.749 [2024-07-26 02:09:27.531723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.749 [2024-07-26 02:09:27.531738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.749 [2024-07-26 02:09:27.531779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.749 qpair failed and we were unable to recover it. 
00:33:45.749 [2024-07-26 02:09:27.541658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.749 [2024-07-26 02:09:27.541812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.749 [2024-07-26 02:09:27.541839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.749 [2024-07-26 02:09:27.541855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.749 [2024-07-26 02:09:27.541873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.749 [2024-07-26 02:09:27.541906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.749 qpair failed and we were unable to recover it. 
00:33:45.749 [2024-07-26 02:09:27.551724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.749 [2024-07-26 02:09:27.551837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.749 [2024-07-26 02:09:27.551864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.749 [2024-07-26 02:09:27.551880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.749 [2024-07-26 02:09:27.551895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.749 [2024-07-26 02:09:27.551927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.749 qpair failed and we were unable to recover it. 
00:33:45.749 [2024-07-26 02:09:27.561629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.749 [2024-07-26 02:09:27.561738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.749 [2024-07-26 02:09:27.561764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.749 [2024-07-26 02:09:27.561780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.749 [2024-07-26 02:09:27.561795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.749 [2024-07-26 02:09:27.561826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.749 qpair failed and we were unable to recover it. 
00:33:45.749 [2024-07-26 02:09:27.571658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.749 [2024-07-26 02:09:27.571768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.749 [2024-07-26 02:09:27.571795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.749 [2024-07-26 02:09:27.571812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.750 [2024-07-26 02:09:27.571826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.750 [2024-07-26 02:09:27.571860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.750 qpair failed and we were unable to recover it. 
00:33:45.750 [2024-07-26 02:09:27.581702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.750 [2024-07-26 02:09:27.581809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.750 [2024-07-26 02:09:27.581841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.750 [2024-07-26 02:09:27.581858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.750 [2024-07-26 02:09:27.581872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.750 [2024-07-26 02:09:27.581902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.750 qpair failed and we were unable to recover it. 
00:33:45.750 [2024-07-26 02:09:27.591741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.750 [2024-07-26 02:09:27.591849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.750 [2024-07-26 02:09:27.591876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.750 [2024-07-26 02:09:27.591892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.750 [2024-07-26 02:09:27.591906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:45.750 [2024-07-26 02:09:27.591939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.750 qpair failed and we were unable to recover it. 
00:33:45.750 [2024-07-26 02:09:27.601756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.750 [2024-07-26 02:09:27.601889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.750 [2024-07-26 02:09:27.601914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.750 [2024-07-26 02:09:27.601929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.750 [2024-07-26 02:09:27.601944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.750 [2024-07-26 02:09:27.601975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.750 qpair failed and we were unable to recover it.
00:33:45.750 [2024-07-26 02:09:27.611761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.750 [2024-07-26 02:09:27.611878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.750 [2024-07-26 02:09:27.611905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.750 [2024-07-26 02:09:27.611921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.750 [2024-07-26 02:09:27.611934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.750 [2024-07-26 02:09:27.611966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.750 qpair failed and we were unable to recover it.
00:33:45.750 [2024-07-26 02:09:27.621882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.750 [2024-07-26 02:09:27.622043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.750 [2024-07-26 02:09:27.622077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.750 [2024-07-26 02:09:27.622095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.750 [2024-07-26 02:09:27.622109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.750 [2024-07-26 02:09:27.622146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.750 qpair failed and we were unable to recover it.
00:33:45.750 [2024-07-26 02:09:27.631827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.750 [2024-07-26 02:09:27.631940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.750 [2024-07-26 02:09:27.631967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.750 [2024-07-26 02:09:27.631984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.750 [2024-07-26 02:09:27.631998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.750 [2024-07-26 02:09:27.632031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.750 qpair failed and we were unable to recover it.
00:33:45.750 [2024-07-26 02:09:27.641865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.750 [2024-07-26 02:09:27.641978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.750 [2024-07-26 02:09:27.642015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.750 [2024-07-26 02:09:27.642031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.750 [2024-07-26 02:09:27.642046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.750 [2024-07-26 02:09:27.642085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.750 qpair failed and we were unable to recover it.
00:33:45.750 [2024-07-26 02:09:27.651880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.750 [2024-07-26 02:09:27.652012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.750 [2024-07-26 02:09:27.652039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.750 [2024-07-26 02:09:27.652055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.750 [2024-07-26 02:09:27.652077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.750 [2024-07-26 02:09:27.652108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.750 qpair failed and we were unable to recover it.
00:33:45.750 [2024-07-26 02:09:27.661924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.750 [2024-07-26 02:09:27.662031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.750 [2024-07-26 02:09:27.662066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.750 [2024-07-26 02:09:27.662084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.750 [2024-07-26 02:09:27.662099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.750 [2024-07-26 02:09:27.662132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.750 qpair failed and we were unable to recover it.
00:33:45.750 [2024-07-26 02:09:27.671972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.750 [2024-07-26 02:09:27.672085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.750 [2024-07-26 02:09:27.672125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.750 [2024-07-26 02:09:27.672141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.750 [2024-07-26 02:09:27.672156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.750 [2024-07-26 02:09:27.672194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.750 qpair failed and we were unable to recover it.
00:33:45.750 [2024-07-26 02:09:27.681973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.750 [2024-07-26 02:09:27.682098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.750 [2024-07-26 02:09:27.682124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.750 [2024-07-26 02:09:27.682140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.750 [2024-07-26 02:09:27.682154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.750 [2024-07-26 02:09:27.682185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.750 qpair failed and we were unable to recover it.
00:33:45.750 [2024-07-26 02:09:27.691995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.750 [2024-07-26 02:09:27.692135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.750 [2024-07-26 02:09:27.692162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.751 [2024-07-26 02:09:27.692177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.751 [2024-07-26 02:09:27.692191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.751 [2024-07-26 02:09:27.692223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.751 qpair failed and we were unable to recover it.
00:33:45.751 [2024-07-26 02:09:27.702068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.751 [2024-07-26 02:09:27.702193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.751 [2024-07-26 02:09:27.702220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.751 [2024-07-26 02:09:27.702236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.751 [2024-07-26 02:09:27.702255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.751 [2024-07-26 02:09:27.702287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.751 qpair failed and we were unable to recover it.
00:33:45.751 [2024-07-26 02:09:27.712053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.751 [2024-07-26 02:09:27.712171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.751 [2024-07-26 02:09:27.712209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.751 [2024-07-26 02:09:27.712225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.751 [2024-07-26 02:09:27.712240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.751 [2024-07-26 02:09:27.712276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.751 qpair failed and we were unable to recover it.
00:33:45.751 [2024-07-26 02:09:27.722090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.751 [2024-07-26 02:09:27.722251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.751 [2024-07-26 02:09:27.722279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.751 [2024-07-26 02:09:27.722296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.751 [2024-07-26 02:09:27.722310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.751 [2024-07-26 02:09:27.722342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.751 qpair failed and we were unable to recover it.
00:33:45.751 [2024-07-26 02:09:27.732127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.751 [2024-07-26 02:09:27.732287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.751 [2024-07-26 02:09:27.732314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.751 [2024-07-26 02:09:27.732331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.751 [2024-07-26 02:09:27.732345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.751 [2024-07-26 02:09:27.732375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.751 qpair failed and we were unable to recover it.
00:33:45.751 [2024-07-26 02:09:27.742144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.751 [2024-07-26 02:09:27.742263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.751 [2024-07-26 02:09:27.742288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.751 [2024-07-26 02:09:27.742304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.751 [2024-07-26 02:09:27.742318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.751 [2024-07-26 02:09:27.742350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.751 qpair failed and we were unable to recover it.
00:33:45.751 [2024-07-26 02:09:27.752256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.751 [2024-07-26 02:09:27.752387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.751 [2024-07-26 02:09:27.752416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.751 [2024-07-26 02:09:27.752434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.751 [2024-07-26 02:09:27.752449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:45.751 [2024-07-26 02:09:27.752480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.751 qpair failed and we were unable to recover it.
00:33:46.013 [2024-07-26 02:09:27.762303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.013 [2024-07-26 02:09:27.762423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.762448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.762464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.762478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.762509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.772241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.014 [2024-07-26 02:09:27.772353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.772379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.772395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.772409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.772440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.782263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.014 [2024-07-26 02:09:27.782367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.782392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.782407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.782421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.782452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.792302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.014 [2024-07-26 02:09:27.792410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.792439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.792456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.792470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.792513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.802344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.014 [2024-07-26 02:09:27.802463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.802489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.802505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.802524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.802557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.812434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.014 [2024-07-26 02:09:27.812545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.812571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.812587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.812601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.812633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.822447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.014 [2024-07-26 02:09:27.822557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.822583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.822599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.822613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.822644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.832427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.014 [2024-07-26 02:09:27.832557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.832585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.832601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.832616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.832647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.842540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.014 [2024-07-26 02:09:27.842675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.842702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.842718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.842732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.842763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.852443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.014 [2024-07-26 02:09:27.852553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.852578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.852594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.852608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.852639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.862474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.014 [2024-07-26 02:09:27.862581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.862607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.862622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.862637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.862668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.872544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.014 [2024-07-26 02:09:27.872686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.014 [2024-07-26 02:09:27.872713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.014 [2024-07-26 02:09:27.872730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.014 [2024-07-26 02:09:27.872744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.014 [2024-07-26 02:09:27.872774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.014 qpair failed and we were unable to recover it.
00:33:46.014 [2024-07-26 02:09:27.882577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.015 [2024-07-26 02:09:27.882691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.015 [2024-07-26 02:09:27.882717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.015 [2024-07-26 02:09:27.882734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.015 [2024-07-26 02:09:27.882748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.015 [2024-07-26 02:09:27.882779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.015 qpair failed and we were unable to recover it.
00:33:46.015 [2024-07-26 02:09:27.892578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.015 [2024-07-26 02:09:27.892694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.015 [2024-07-26 02:09:27.892720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.015 [2024-07-26 02:09:27.892743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.015 [2024-07-26 02:09:27.892759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.015 [2024-07-26 02:09:27.892791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.015 qpair failed and we were unable to recover it.
00:33:46.015 [2024-07-26 02:09:27.902611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.015 [2024-07-26 02:09:27.902716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.015 [2024-07-26 02:09:27.902741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.015 [2024-07-26 02:09:27.902757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.015 [2024-07-26 02:09:27.902771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.015 [2024-07-26 02:09:27.902802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.015 qpair failed and we were unable to recover it.
00:33:46.015 [2024-07-26 02:09:27.912617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.015 [2024-07-26 02:09:27.912746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.015 [2024-07-26 02:09:27.912771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.015 [2024-07-26 02:09:27.912787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.015 [2024-07-26 02:09:27.912802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.015 [2024-07-26 02:09:27.912834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.015 qpair failed and we were unable to recover it.
00:33:46.015 [2024-07-26 02:09:27.922632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.015 [2024-07-26 02:09:27.922744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.015 [2024-07-26 02:09:27.922770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.015 [2024-07-26 02:09:27.922786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.015 [2024-07-26 02:09:27.922799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.015 [2024-07-26 02:09:27.922830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.015 qpair failed and we were unable to recover it.
00:33:46.015 [2024-07-26 02:09:27.932676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.015 [2024-07-26 02:09:27.932783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.015 [2024-07-26 02:09:27.932809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.015 [2024-07-26 02:09:27.932824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.015 [2024-07-26 02:09:27.932838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.015 [2024-07-26 02:09:27.932869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.015 qpair failed and we were unable to recover it.
00:33:46.015 [2024-07-26 02:09:27.942796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.015 [2024-07-26 02:09:27.942927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.015 [2024-07-26 02:09:27.942955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.015 [2024-07-26 02:09:27.942970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.015 [2024-07-26 02:09:27.942985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.015 [2024-07-26 02:09:27.943015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.015 qpair failed and we were unable to recover it.
00:33:46.015 [2024-07-26 02:09:27.952827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.015 [2024-07-26 02:09:27.952963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.015 [2024-07-26 02:09:27.952990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.015 [2024-07-26 02:09:27.953006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.015 [2024-07-26 02:09:27.953020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.015 [2024-07-26 02:09:27.953051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.015 qpair failed and we were unable to recover it.
00:33:46.015 [2024-07-26 02:09:27.962781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.015 [2024-07-26 02:09:27.962895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.015 [2024-07-26 02:09:27.962922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.015 [2024-07-26 02:09:27.962938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.015 [2024-07-26 02:09:27.962952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.015 [2024-07-26 02:09:27.962984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.015 qpair failed and we were unable to recover it. 
00:33:46.015 [2024-07-26 02:09:27.972798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.015 [2024-07-26 02:09:27.972912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.015 [2024-07-26 02:09:27.972937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.015 [2024-07-26 02:09:27.972952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.015 [2024-07-26 02:09:27.972967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.015 [2024-07-26 02:09:27.972997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.015 qpair failed and we were unable to recover it. 
00:33:46.015 [2024-07-26 02:09:27.982847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.015 [2024-07-26 02:09:27.982957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.015 [2024-07-26 02:09:27.982988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.015 [2024-07-26 02:09:27.983005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.015 [2024-07-26 02:09:27.983019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.015 [2024-07-26 02:09:27.983051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.015 qpair failed and we were unable to recover it. 
00:33:46.015 [2024-07-26 02:09:27.992859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.015 [2024-07-26 02:09:27.992971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.015 [2024-07-26 02:09:27.992999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.015 [2024-07-26 02:09:27.993014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.015 [2024-07-26 02:09:27.993029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.015 [2024-07-26 02:09:27.993067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.015 qpair failed and we were unable to recover it. 
00:33:46.015 [2024-07-26 02:09:28.002928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.016 [2024-07-26 02:09:28.003043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.016 [2024-07-26 02:09:28.003078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.016 [2024-07-26 02:09:28.003095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.016 [2024-07-26 02:09:28.003109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.016 [2024-07-26 02:09:28.003140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.016 qpair failed and we were unable to recover it. 
00:33:46.016 [2024-07-26 02:09:28.013000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.016 [2024-07-26 02:09:28.013119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.016 [2024-07-26 02:09:28.013145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.016 [2024-07-26 02:09:28.013161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.016 [2024-07-26 02:09:28.013175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.016 [2024-07-26 02:09:28.013208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.016 qpair failed and we were unable to recover it. 
00:33:46.016 [2024-07-26 02:09:28.022964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.016 [2024-07-26 02:09:28.023083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.016 [2024-07-26 02:09:28.023109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.016 [2024-07-26 02:09:28.023124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.016 [2024-07-26 02:09:28.023138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.016 [2024-07-26 02:09:28.023175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.016 qpair failed and we were unable to recover it. 
00:33:46.278 [2024-07-26 02:09:28.032972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.278 [2024-07-26 02:09:28.033126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.278 [2024-07-26 02:09:28.033153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.278 [2024-07-26 02:09:28.033170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.278 [2024-07-26 02:09:28.033184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.278 [2024-07-26 02:09:28.033216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.278 qpair failed and we were unable to recover it. 
00:33:46.278 [2024-07-26 02:09:28.043051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.278 [2024-07-26 02:09:28.043173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.278 [2024-07-26 02:09:28.043198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.278 [2024-07-26 02:09:28.043214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.278 [2024-07-26 02:09:28.043227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.278 [2024-07-26 02:09:28.043259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.278 qpair failed and we were unable to recover it. 
00:33:46.278 [2024-07-26 02:09:28.053053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.278 [2024-07-26 02:09:28.053176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.278 [2024-07-26 02:09:28.053202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.278 [2024-07-26 02:09:28.053217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.278 [2024-07-26 02:09:28.053231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.278 [2024-07-26 02:09:28.053262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.278 qpair failed and we were unable to recover it. 
00:33:46.278 [2024-07-26 02:09:28.063082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.278 [2024-07-26 02:09:28.063233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.278 [2024-07-26 02:09:28.063261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.278 [2024-07-26 02:09:28.063277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.278 [2024-07-26 02:09:28.063291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.278 [2024-07-26 02:09:28.063323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.278 qpair failed and we were unable to recover it. 
00:33:46.278 [2024-07-26 02:09:28.073145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.278 [2024-07-26 02:09:28.073293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.278 [2024-07-26 02:09:28.073327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.278 [2024-07-26 02:09:28.073344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.278 [2024-07-26 02:09:28.073358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.278 [2024-07-26 02:09:28.073401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.278 qpair failed and we were unable to recover it. 
00:33:46.278 [2024-07-26 02:09:28.083165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.278 [2024-07-26 02:09:28.083292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.278 [2024-07-26 02:09:28.083320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.278 [2024-07-26 02:09:28.083336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.278 [2024-07-26 02:09:28.083350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.278 [2024-07-26 02:09:28.083381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.278 qpair failed and we were unable to recover it. 
00:33:46.278 [2024-07-26 02:09:28.093187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.278 [2024-07-26 02:09:28.093301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.278 [2024-07-26 02:09:28.093328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.278 [2024-07-26 02:09:28.093343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.278 [2024-07-26 02:09:28.093358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.278 [2024-07-26 02:09:28.093389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.278 qpair failed and we were unable to recover it. 
00:33:46.278 [2024-07-26 02:09:28.103204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.278 [2024-07-26 02:09:28.103317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.278 [2024-07-26 02:09:28.103343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.278 [2024-07-26 02:09:28.103358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.103373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.103404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.113237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.113353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.113379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.113395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.113409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.113446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.123232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.123385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.123412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.123429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.123443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.123474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.133350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.133461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.133486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.133502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.133516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.133547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.143296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.143424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.143460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.143476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.143489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.143521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.153533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.153657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.153684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.153700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.153714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.153745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.163435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.163608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.163661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.163680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.163695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.163754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.173420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.173533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.173558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.173573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.173588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.173619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.183460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.183597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.183625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.183642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.183656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.183700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.193438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.193547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.193572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.193588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.193602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.193633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.203547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.203665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.203691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.203707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.203730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.203778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.213498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.213607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.213632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.213648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.279 [2024-07-26 02:09:28.213662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.279 [2024-07-26 02:09:28.213693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.279 qpair failed and we were unable to recover it. 
00:33:46.279 [2024-07-26 02:09:28.223546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.279 [2024-07-26 02:09:28.223658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.279 [2024-07-26 02:09:28.223683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.279 [2024-07-26 02:09:28.223699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.280 [2024-07-26 02:09:28.223714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.280 [2024-07-26 02:09:28.223745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.280 qpair failed and we were unable to recover it. 
00:33:46.280 [2024-07-26 02:09:28.233568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.280 [2024-07-26 02:09:28.233673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.280 [2024-07-26 02:09:28.233698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.280 [2024-07-26 02:09:28.233714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.280 [2024-07-26 02:09:28.233729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.280 [2024-07-26 02:09:28.233771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.280 qpair failed and we were unable to recover it. 
00:33:46.280 [2024-07-26 02:09:28.243658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.280 [2024-07-26 02:09:28.243780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.280 [2024-07-26 02:09:28.243806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.280 [2024-07-26 02:09:28.243821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.280 [2024-07-26 02:09:28.243837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.280 [2024-07-26 02:09:28.243870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.280 qpair failed and we were unable to recover it.
00:33:46.280 [2024-07-26 02:09:28.253613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.280 [2024-07-26 02:09:28.253728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.280 [2024-07-26 02:09:28.253754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.280 [2024-07-26 02:09:28.253770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.280 [2024-07-26 02:09:28.253785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.280 [2024-07-26 02:09:28.253816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.280 qpair failed and we were unable to recover it.
00:33:46.280 [2024-07-26 02:09:28.263626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.280 [2024-07-26 02:09:28.263729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.280 [2024-07-26 02:09:28.263755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.280 [2024-07-26 02:09:28.263770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.280 [2024-07-26 02:09:28.263784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.280 [2024-07-26 02:09:28.263815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.280 qpair failed and we were unable to recover it.
00:33:46.280 [2024-07-26 02:09:28.273675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.280 [2024-07-26 02:09:28.273797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.280 [2024-07-26 02:09:28.273823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.280 [2024-07-26 02:09:28.273839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.280 [2024-07-26 02:09:28.273853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.280 [2024-07-26 02:09:28.273885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.280 qpair failed and we were unable to recover it.
00:33:46.280 [2024-07-26 02:09:28.283709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.280 [2024-07-26 02:09:28.283824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.280 [2024-07-26 02:09:28.283849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.280 [2024-07-26 02:09:28.283864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.280 [2024-07-26 02:09:28.283879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.280 [2024-07-26 02:09:28.283910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.280 qpair failed and we were unable to recover it.
00:33:46.540 [2024-07-26 02:09:28.293712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.540 [2024-07-26 02:09:28.293823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.540 [2024-07-26 02:09:28.293849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.540 [2024-07-26 02:09:28.293871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.540 [2024-07-26 02:09:28.293886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.540 [2024-07-26 02:09:28.293918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.540 qpair failed and we were unable to recover it.
00:33:46.540 [2024-07-26 02:09:28.303769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.540 [2024-07-26 02:09:28.303890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.540 [2024-07-26 02:09:28.303916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.540 [2024-07-26 02:09:28.303931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.540 [2024-07-26 02:09:28.303946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.540 [2024-07-26 02:09:28.303979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.540 qpair failed and we were unable to recover it.
00:33:46.540 [2024-07-26 02:09:28.313782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.540 [2024-07-26 02:09:28.313886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.540 [2024-07-26 02:09:28.313912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.540 [2024-07-26 02:09:28.313928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.540 [2024-07-26 02:09:28.313942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.540 [2024-07-26 02:09:28.313975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.540 qpair failed and we were unable to recover it.
00:33:46.540 [2024-07-26 02:09:28.323852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.540 [2024-07-26 02:09:28.323971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.540 [2024-07-26 02:09:28.323997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.540 [2024-07-26 02:09:28.324013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.540 [2024-07-26 02:09:28.324028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.540 [2024-07-26 02:09:28.324066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.540 qpair failed and we were unable to recover it.
00:33:46.540 [2024-07-26 02:09:28.333863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.540 [2024-07-26 02:09:28.334016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.540 [2024-07-26 02:09:28.334044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.540 [2024-07-26 02:09:28.334068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.540 [2024-07-26 02:09:28.334085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.540 [2024-07-26 02:09:28.334117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.540 qpair failed and we were unable to recover it.
00:33:46.540 [2024-07-26 02:09:28.343957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.540 [2024-07-26 02:09:28.344072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.540 [2024-07-26 02:09:28.344098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.540 [2024-07-26 02:09:28.344113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.540 [2024-07-26 02:09:28.344127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.540 [2024-07-26 02:09:28.344158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.540 qpair failed and we were unable to recover it.
00:33:46.540 [2024-07-26 02:09:28.353906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.540 [2024-07-26 02:09:28.354026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.540 [2024-07-26 02:09:28.354054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.540 [2024-07-26 02:09:28.354083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.540 [2024-07-26 02:09:28.354100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.540 [2024-07-26 02:09:28.354132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.540 qpair failed and we were unable to recover it.
00:33:46.540 [2024-07-26 02:09:28.364050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.540 [2024-07-26 02:09:28.364180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.540 [2024-07-26 02:09:28.364206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.540 [2024-07-26 02:09:28.364222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.540 [2024-07-26 02:09:28.364236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.540 [2024-07-26 02:09:28.364268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.540 qpair failed and we were unable to recover it.
00:33:46.540 [2024-07-26 02:09:28.373965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.540 [2024-07-26 02:09:28.374083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.540 [2024-07-26 02:09:28.374109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.374125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.374139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.374170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.383983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.384095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.384122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.384143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.384158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.384202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.394014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.394132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.394159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.394175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.394190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.394221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.404078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.404194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.404219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.404235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.404250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.404282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.414068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.414173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.414199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.414214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.414228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.414259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.424113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.424225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.424251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.424267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.424285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.424318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.434158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.434273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.434299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.434314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.434328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.434371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.444295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.444422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.444462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.444478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.444492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.444524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.454188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.454304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.454331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.454348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.454363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.454395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.464306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.464420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.464461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.464477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.464491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.464537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.474234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.474342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.474374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.474392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.474407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.474438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.484271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.484383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.484411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.541 [2024-07-26 02:09:28.484427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.541 [2024-07-26 02:09:28.484442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.541 [2024-07-26 02:09:28.484485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.541 qpair failed and we were unable to recover it.
00:33:46.541 [2024-07-26 02:09:28.494295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.541 [2024-07-26 02:09:28.494406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.541 [2024-07-26 02:09:28.494433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.542 [2024-07-26 02:09:28.494450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.542 [2024-07-26 02:09:28.494465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.542 [2024-07-26 02:09:28.494496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.542 qpair failed and we were unable to recover it.
00:33:46.542 [2024-07-26 02:09:28.504325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.542 [2024-07-26 02:09:28.504426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.542 [2024-07-26 02:09:28.504452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.542 [2024-07-26 02:09:28.504468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.542 [2024-07-26 02:09:28.504482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.542 [2024-07-26 02:09:28.504512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.542 qpair failed and we were unable to recover it.
00:33:46.542 [2024-07-26 02:09:28.514358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.542 [2024-07-26 02:09:28.514461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.542 [2024-07-26 02:09:28.514487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.542 [2024-07-26 02:09:28.514502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.542 [2024-07-26 02:09:28.514516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.542 [2024-07-26 02:09:28.514551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.542 qpair failed and we were unable to recover it.
00:33:46.542 [2024-07-26 02:09:28.524421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.542 [2024-07-26 02:09:28.524536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.542 [2024-07-26 02:09:28.524563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.542 [2024-07-26 02:09:28.524579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.542 [2024-07-26 02:09:28.524594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.542 [2024-07-26 02:09:28.524624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.542 qpair failed and we were unable to recover it.
00:33:46.542 [2024-07-26 02:09:28.534429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.542 [2024-07-26 02:09:28.534560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.542 [2024-07-26 02:09:28.534587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.542 [2024-07-26 02:09:28.534603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.542 [2024-07-26 02:09:28.534618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.542 [2024-07-26 02:09:28.534650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.542 qpair failed and we were unable to recover it.
00:33:46.542 [2024-07-26 02:09:28.544529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.542 [2024-07-26 02:09:28.544649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.542 [2024-07-26 02:09:28.544676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.542 [2024-07-26 02:09:28.544693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.542 [2024-07-26 02:09:28.544708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.542 [2024-07-26 02:09:28.544740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.542 qpair failed and we were unable to recover it.
00:33:46.801 [2024-07-26 02:09:28.554458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.801 [2024-07-26 02:09:28.554568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.801 [2024-07-26 02:09:28.554595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.801 [2024-07-26 02:09:28.554612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.801 [2024-07-26 02:09:28.554628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.801 [2024-07-26 02:09:28.554659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.801 qpair failed and we were unable to recover it.
00:33:46.801 [2024-07-26 02:09:28.564511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.801 [2024-07-26 02:09:28.564624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.801 [2024-07-26 02:09:28.564656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.801 [2024-07-26 02:09:28.564672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.801 [2024-07-26 02:09:28.564687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.802 [2024-07-26 02:09:28.564719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.802 qpair failed and we were unable to recover it.
00:33:46.802 [2024-07-26 02:09:28.574558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.802 [2024-07-26 02:09:28.574674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.802 [2024-07-26 02:09:28.574701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.802 [2024-07-26 02:09:28.574717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.802 [2024-07-26 02:09:28.574730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.802 [2024-07-26 02:09:28.574762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.802 qpair failed and we were unable to recover it.
00:33:46.802 [2024-07-26 02:09:28.584573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.802 [2024-07-26 02:09:28.584688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.802 [2024-07-26 02:09:28.584714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.802 [2024-07-26 02:09:28.584730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.802 [2024-07-26 02:09:28.584745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.802 [2024-07-26 02:09:28.584776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.802 qpair failed and we were unable to recover it.
00:33:46.802 [2024-07-26 02:09:28.594568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.802 [2024-07-26 02:09:28.594681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.802 [2024-07-26 02:09:28.594708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.802 [2024-07-26 02:09:28.594723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.802 [2024-07-26 02:09:28.594738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.802 [2024-07-26 02:09:28.594768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.802 qpair failed and we were unable to recover it.
00:33:46.802 [2024-07-26 02:09:28.604668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.802 [2024-07-26 02:09:28.604836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.802 [2024-07-26 02:09:28.604863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.802 [2024-07-26 02:09:28.604894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.802 [2024-07-26 02:09:28.604915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.802 [2024-07-26 02:09:28.604961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.802 qpair failed and we were unable to recover it. 
00:33:46.802 [2024-07-26 02:09:28.614613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.802 [2024-07-26 02:09:28.614726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.802 [2024-07-26 02:09:28.614753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.802 [2024-07-26 02:09:28.614769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.802 [2024-07-26 02:09:28.614784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.802 [2024-07-26 02:09:28.614814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.802 qpair failed and we were unable to recover it. 
00:33:46.802 [2024-07-26 02:09:28.624684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.802 [2024-07-26 02:09:28.624797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.802 [2024-07-26 02:09:28.624824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.802 [2024-07-26 02:09:28.624839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.802 [2024-07-26 02:09:28.624854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.802 [2024-07-26 02:09:28.624884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.802 qpair failed and we were unable to recover it. 
00:33:46.802 [2024-07-26 02:09:28.634728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.802 [2024-07-26 02:09:28.634886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.802 [2024-07-26 02:09:28.634912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.802 [2024-07-26 02:09:28.634928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.802 [2024-07-26 02:09:28.634943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.802 [2024-07-26 02:09:28.634974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.802 qpair failed and we were unable to recover it. 
00:33:46.802 [2024-07-26 02:09:28.644739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.802 [2024-07-26 02:09:28.644860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.802 [2024-07-26 02:09:28.644886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.802 [2024-07-26 02:09:28.644902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.802 [2024-07-26 02:09:28.644917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:46.802 [2024-07-26 02:09:28.644947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.802 qpair failed and we were unable to recover it. 
00:33:46.802 [2024-07-26 02:09:28.654754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.802 [2024-07-26 02:09:28.654880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.802 [2024-07-26 02:09:28.654907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.802 [2024-07-26 02:09:28.654923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.802 [2024-07-26 02:09:28.654938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.802 [2024-07-26 02:09:28.654969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.802 qpair failed and we were unable to recover it.
00:33:46.802 [2024-07-26 02:09:28.664826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.802 [2024-07-26 02:09:28.664936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.802 [2024-07-26 02:09:28.664963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.802 [2024-07-26 02:09:28.664978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.802 [2024-07-26 02:09:28.664993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.802 [2024-07-26 02:09:28.665025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.802 qpair failed and we were unable to recover it.
00:33:46.802 [2024-07-26 02:09:28.674803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.802 [2024-07-26 02:09:28.674915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.802 [2024-07-26 02:09:28.674941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.802 [2024-07-26 02:09:28.674957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.802 [2024-07-26 02:09:28.674971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.802 [2024-07-26 02:09:28.675002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.802 qpair failed and we were unable to recover it.
00:33:46.802 [2024-07-26 02:09:28.684970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.802 [2024-07-26 02:09:28.685104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.802 [2024-07-26 02:09:28.685131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.802 [2024-07-26 02:09:28.685147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.802 [2024-07-26 02:09:28.685161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.802 [2024-07-26 02:09:28.685192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.802 qpair failed and we were unable to recover it.
00:33:46.802 [2024-07-26 02:09:28.694875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.802 [2024-07-26 02:09:28.694996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.802 [2024-07-26 02:09:28.695023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.802 [2024-07-26 02:09:28.695044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.802 [2024-07-26 02:09:28.695070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.802 [2024-07-26 02:09:28.695105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.802 qpair failed and we were unable to recover it.
00:33:46.802 [2024-07-26 02:09:28.704889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.802 [2024-07-26 02:09:28.704996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.802 [2024-07-26 02:09:28.705022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.802 [2024-07-26 02:09:28.705038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.803 [2024-07-26 02:09:28.705053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.803 [2024-07-26 02:09:28.705093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.803 qpair failed and we were unable to recover it.
00:33:46.803 [2024-07-26 02:09:28.714924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.803 [2024-07-26 02:09:28.715036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.803 [2024-07-26 02:09:28.715072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.803 [2024-07-26 02:09:28.715091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.803 [2024-07-26 02:09:28.715106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.803 [2024-07-26 02:09:28.715137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.803 qpair failed and we were unable to recover it.
00:33:46.803 [2024-07-26 02:09:28.724996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.803 [2024-07-26 02:09:28.725132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.803 [2024-07-26 02:09:28.725166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.803 [2024-07-26 02:09:28.725187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.803 [2024-07-26 02:09:28.725202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.803 [2024-07-26 02:09:28.725236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.803 qpair failed and we were unable to recover it.
00:33:46.803 [2024-07-26 02:09:28.734975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.803 [2024-07-26 02:09:28.735104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.803 [2024-07-26 02:09:28.735132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.803 [2024-07-26 02:09:28.735148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.803 [2024-07-26 02:09:28.735163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.803 [2024-07-26 02:09:28.735194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.803 qpair failed and we were unable to recover it.
00:33:46.803 [2024-07-26 02:09:28.745038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.803 [2024-07-26 02:09:28.745200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.803 [2024-07-26 02:09:28.745228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.803 [2024-07-26 02:09:28.745244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.803 [2024-07-26 02:09:28.745258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.803 [2024-07-26 02:09:28.745288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.803 qpair failed and we were unable to recover it.
00:33:46.803 [2024-07-26 02:09:28.755032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.803 [2024-07-26 02:09:28.755225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.803 [2024-07-26 02:09:28.755252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.803 [2024-07-26 02:09:28.755267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.803 [2024-07-26 02:09:28.755282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.803 [2024-07-26 02:09:28.755314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.803 qpair failed and we were unable to recover it.
00:33:46.803 [2024-07-26 02:09:28.765150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.803 [2024-07-26 02:09:28.765264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.803 [2024-07-26 02:09:28.765290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.803 [2024-07-26 02:09:28.765306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.803 [2024-07-26 02:09:28.765320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.803 [2024-07-26 02:09:28.765351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.803 qpair failed and we were unable to recover it.
00:33:46.803 [2024-07-26 02:09:28.775112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.803 [2024-07-26 02:09:28.775228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.803 [2024-07-26 02:09:28.775255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.803 [2024-07-26 02:09:28.775271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.803 [2024-07-26 02:09:28.775285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.803 [2024-07-26 02:09:28.775318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.803 qpair failed and we were unable to recover it.
00:33:46.803 [2024-07-26 02:09:28.785156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.803 [2024-07-26 02:09:28.785276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.803 [2024-07-26 02:09:28.785303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.803 [2024-07-26 02:09:28.785325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.803 [2024-07-26 02:09:28.785342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.803 [2024-07-26 02:09:28.785373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.803 qpair failed and we were unable to recover it.
00:33:46.803 [2024-07-26 02:09:28.795150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.803 [2024-07-26 02:09:28.795256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.803 [2024-07-26 02:09:28.795282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.803 [2024-07-26 02:09:28.795298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.803 [2024-07-26 02:09:28.795313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.803 [2024-07-26 02:09:28.795344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.803 qpair failed and we were unable to recover it.
00:33:46.803 [2024-07-26 02:09:28.805228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.803 [2024-07-26 02:09:28.805338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.803 [2024-07-26 02:09:28.805376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.803 [2024-07-26 02:09:28.805391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.803 [2024-07-26 02:09:28.805407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:46.803 [2024-07-26 02:09:28.805439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.803 qpair failed and we were unable to recover it.
00:33:47.061 [2024-07-26 02:09:28.815215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.061 [2024-07-26 02:09:28.815322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.061 [2024-07-26 02:09:28.815349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.061 [2024-07-26 02:09:28.815365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.061 [2024-07-26 02:09:28.815383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.061 [2024-07-26 02:09:28.815426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.061 qpair failed and we were unable to recover it.
00:33:47.061 [2024-07-26 02:09:28.825279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.061 [2024-07-26 02:09:28.825395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.061 [2024-07-26 02:09:28.825421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.061 [2024-07-26 02:09:28.825437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.061 [2024-07-26 02:09:28.825452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.061 [2024-07-26 02:09:28.825484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.061 qpair failed and we were unable to recover it.
00:33:47.061 [2024-07-26 02:09:28.835267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.061 [2024-07-26 02:09:28.835379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.061 [2024-07-26 02:09:28.835405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.061 [2024-07-26 02:09:28.835428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.061 [2024-07-26 02:09:28.835443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.061 [2024-07-26 02:09:28.835474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.061 qpair failed and we were unable to recover it.
00:33:47.061 [2024-07-26 02:09:28.845450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.061 [2024-07-26 02:09:28.845564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.061 [2024-07-26 02:09:28.845605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.061 [2024-07-26 02:09:28.845620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.061 [2024-07-26 02:09:28.845635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.061 [2024-07-26 02:09:28.845679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.061 qpair failed and we were unable to recover it.
00:33:47.061 [2024-07-26 02:09:28.855312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.061 [2024-07-26 02:09:28.855427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.061 [2024-07-26 02:09:28.855453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.061 [2024-07-26 02:09:28.855469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.061 [2024-07-26 02:09:28.855484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.061 [2024-07-26 02:09:28.855516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.061 qpair failed and we were unable to recover it.
00:33:47.061 [2024-07-26 02:09:28.865347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.061 [2024-07-26 02:09:28.865479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.061 [2024-07-26 02:09:28.865505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.061 [2024-07-26 02:09:28.865521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.061 [2024-07-26 02:09:28.865536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.061 [2024-07-26 02:09:28.865582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.061 qpair failed and we were unable to recover it.
00:33:47.061 [2024-07-26 02:09:28.875380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.061 [2024-07-26 02:09:28.875495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.061 [2024-07-26 02:09:28.875531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.061 [2024-07-26 02:09:28.875552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.061 [2024-07-26 02:09:28.875568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.061 [2024-07-26 02:09:28.875625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.061 qpair failed and we were unable to recover it.
00:33:47.061 [2024-07-26 02:09:28.885457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.061 [2024-07-26 02:09:28.885577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.061 [2024-07-26 02:09:28.885604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.061 [2024-07-26 02:09:28.885620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.061 [2024-07-26 02:09:28.885635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.061 [2024-07-26 02:09:28.885666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.061 qpair failed and we were unable to recover it.
00:33:47.061 [2024-07-26 02:09:28.895423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.062 [2024-07-26 02:09:28.895527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.062 [2024-07-26 02:09:28.895553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.062 [2024-07-26 02:09:28.895569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.062 [2024-07-26 02:09:28.895583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.062 [2024-07-26 02:09:28.895614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.062 qpair failed and we were unable to recover it.
00:33:47.062 [2024-07-26 02:09:28.905491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.062 [2024-07-26 02:09:28.905604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.062 [2024-07-26 02:09:28.905631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.062 [2024-07-26 02:09:28.905647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.062 [2024-07-26 02:09:28.905661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.062 [2024-07-26 02:09:28.905692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.062 qpair failed and we were unable to recover it.
00:33:47.062 [2024-07-26 02:09:28.915552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.062 [2024-07-26 02:09:28.915661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.062 [2024-07-26 02:09:28.915688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.062 [2024-07-26 02:09:28.915704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.062 [2024-07-26 02:09:28.915719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.062 [2024-07-26 02:09:28.915756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.062 qpair failed and we were unable to recover it.
00:33:47.062 [2024-07-26 02:09:28.925588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:28.925709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:28.925735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:28.925751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:28.925765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:28.925795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:28.935570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:28.935693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:28.935719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:28.935735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:28.935750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:28.935781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:28.945565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:28.945708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:28.945733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:28.945749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:28.945764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:28.945795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:28.955625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:28.955766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:28.955794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:28.955811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:28.955826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:28.955872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:28.965689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:28.965837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:28.965869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:28.965885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:28.965916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:28.965947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:28.975674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:28.975800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:28.975826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:28.975841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:28.975855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:28.975885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:28.985695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:28.985806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:28.985833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:28.985850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:28.985866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:28.985909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:28.995729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:28.995842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:28.995868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:28.995884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:28.995899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:28.995930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:29.005746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:29.005864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:29.005890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:29.005906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:29.005926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:29.005958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:29.015768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:29.015882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:29.015908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:29.015925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:29.015939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:29.015970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:29.025801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:29.025906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:29.025933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:29.025949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:29.025963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:29.026007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:29.035846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:29.035968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:29.035996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:29.036012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:29.036027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:29.036066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:29.045890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:29.046016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:29.046042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:29.046057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:29.046081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:29.046114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:29.055972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:29.056110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:29.056146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:29.056165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:29.056181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:29.056215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.062 [2024-07-26 02:09:29.065917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.062 [2024-07-26 02:09:29.066027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.062 [2024-07-26 02:09:29.066055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.062 [2024-07-26 02:09:29.066080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.062 [2024-07-26 02:09:29.066096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.062 [2024-07-26 02:09:29.066128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.062 qpair failed and we were unable to recover it. 
00:33:47.319 [2024-07-26 02:09:29.075938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.319 [2024-07-26 02:09:29.076047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.319 [2024-07-26 02:09:29.076082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.319 [2024-07-26 02:09:29.076111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.319 [2024-07-26 02:09:29.076126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.319 [2024-07-26 02:09:29.076157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.319 qpair failed and we were unable to recover it. 
00:33:47.319 [2024-07-26 02:09:29.085982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.319 [2024-07-26 02:09:29.086107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.319 [2024-07-26 02:09:29.086134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.319 [2024-07-26 02:09:29.086150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.319 [2024-07-26 02:09:29.086164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.319 [2024-07-26 02:09:29.086196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.319 qpair failed and we were unable to recover it. 
00:33:47.319 [2024-07-26 02:09:29.096001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.319 [2024-07-26 02:09:29.096116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.319 [2024-07-26 02:09:29.096142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.319 [2024-07-26 02:09:29.096157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.319 [2024-07-26 02:09:29.096177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.319 [2024-07-26 02:09:29.096210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.319 qpair failed and we were unable to recover it. 
00:33:47.320 [2024-07-26 02:09:29.106088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.320 [2024-07-26 02:09:29.106233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.320 [2024-07-26 02:09:29.106259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.320 [2024-07-26 02:09:29.106275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.320 [2024-07-26 02:09:29.106290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.320 [2024-07-26 02:09:29.106334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.320 qpair failed and we were unable to recover it. 
00:33:47.320 [2024-07-26 02:09:29.116052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.320 [2024-07-26 02:09:29.116172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.320 [2024-07-26 02:09:29.116200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.320 [2024-07-26 02:09:29.116216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.320 [2024-07-26 02:09:29.116232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.320 [2024-07-26 02:09:29.116264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.320 qpair failed and we were unable to recover it. 
00:33:47.320 [2024-07-26 02:09:29.126104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.320 [2024-07-26 02:09:29.126227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.320 [2024-07-26 02:09:29.126254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.320 [2024-07-26 02:09:29.126270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.320 [2024-07-26 02:09:29.126285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.320 [2024-07-26 02:09:29.126328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.320 qpair failed and we were unable to recover it. 
00:33:47.320 [2024-07-26 02:09:29.136124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.320 [2024-07-26 02:09:29.136251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.320 [2024-07-26 02:09:29.136278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.320 [2024-07-26 02:09:29.136294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.320 [2024-07-26 02:09:29.136309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.320 [2024-07-26 02:09:29.136340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.320 qpair failed and we were unable to recover it. 
00:33:47.320 [2024-07-26 02:09:29.146129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.320 [2024-07-26 02:09:29.146240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.320 [2024-07-26 02:09:29.146267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.320 [2024-07-26 02:09:29.146283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.320 [2024-07-26 02:09:29.146298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.320 [2024-07-26 02:09:29.146328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.320 qpair failed and we were unable to recover it. 
00:33:47.320 [2024-07-26 02:09:29.156210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.320 [2024-07-26 02:09:29.156320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.320 [2024-07-26 02:09:29.156347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.320 [2024-07-26 02:09:29.156363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.320 [2024-07-26 02:09:29.156378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.320 [2024-07-26 02:09:29.156408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.320 qpair failed and we were unable to recover it. 
00:33:47.320 [2024-07-26 02:09:29.166207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.320 [2024-07-26 02:09:29.166335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.320 [2024-07-26 02:09:29.166363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.320 [2024-07-26 02:09:29.166379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.320 [2024-07-26 02:09:29.166394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.320 [2024-07-26 02:09:29.166425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.320 qpair failed and we were unable to recover it. 
00:33:47.320 [2024-07-26 02:09:29.176251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.320 [2024-07-26 02:09:29.176364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.320 [2024-07-26 02:09:29.176392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.320 [2024-07-26 02:09:29.176408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.320 [2024-07-26 02:09:29.176423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.320 [2024-07-26 02:09:29.176468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.320 qpair failed and we were unable to recover it. 
00:33:47.320 [2024-07-26 02:09:29.186319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.320 [2024-07-26 02:09:29.186434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.320 [2024-07-26 02:09:29.186461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.320 [2024-07-26 02:09:29.186483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.320 [2024-07-26 02:09:29.186499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.320 [2024-07-26 02:09:29.186546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.320 qpair failed and we were unable to recover it. 
00:33:47.320 [2024-07-26 02:09:29.196284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.320 [2024-07-26 02:09:29.196399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.320 [2024-07-26 02:09:29.196426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.320 [2024-07-26 02:09:29.196442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.320 [2024-07-26 02:09:29.196457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.320 [2024-07-26 02:09:29.196488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.320 qpair failed and we were unable to recover it. 
00:33:47.320 [2024-07-26 02:09:29.206327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.320 [2024-07-26 02:09:29.206446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.320 [2024-07-26 02:09:29.206472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.320 [2024-07-26 02:09:29.206488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.320 [2024-07-26 02:09:29.206503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.320 [2024-07-26 02:09:29.206536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.321 qpair failed and we were unable to recover it.
00:33:47.321 [2024-07-26 02:09:29.216327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.321 [2024-07-26 02:09:29.216444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.321 [2024-07-26 02:09:29.216472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.321 [2024-07-26 02:09:29.216488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.321 [2024-07-26 02:09:29.216503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.321 [2024-07-26 02:09:29.216534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.321 qpair failed and we were unable to recover it.
00:33:47.321 [2024-07-26 02:09:29.226378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.321 [2024-07-26 02:09:29.226486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.321 [2024-07-26 02:09:29.226512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.321 [2024-07-26 02:09:29.226529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.321 [2024-07-26 02:09:29.226544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.321 [2024-07-26 02:09:29.226575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.321 qpair failed and we were unable to recover it.
00:33:47.321 [2024-07-26 02:09:29.236387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.321 [2024-07-26 02:09:29.236500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.321 [2024-07-26 02:09:29.236526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.321 [2024-07-26 02:09:29.236542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.321 [2024-07-26 02:09:29.236557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.321 [2024-07-26 02:09:29.236589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.321 qpair failed and we were unable to recover it.
00:33:47.321 [2024-07-26 02:09:29.246419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.321 [2024-07-26 02:09:29.246528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.321 [2024-07-26 02:09:29.246554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.321 [2024-07-26 02:09:29.246569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.321 [2024-07-26 02:09:29.246584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.321 [2024-07-26 02:09:29.246616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.321 qpair failed and we were unable to recover it.
00:33:47.321 [2024-07-26 02:09:29.256451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.321 [2024-07-26 02:09:29.256573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.321 [2024-07-26 02:09:29.256599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.321 [2024-07-26 02:09:29.256615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.321 [2024-07-26 02:09:29.256630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.321 [2024-07-26 02:09:29.256662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.321 qpair failed and we were unable to recover it.
00:33:47.321 [2024-07-26 02:09:29.266506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.321 [2024-07-26 02:09:29.266637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.321 [2024-07-26 02:09:29.266665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.321 [2024-07-26 02:09:29.266682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.321 [2024-07-26 02:09:29.266701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.321 [2024-07-26 02:09:29.266749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.321 qpair failed and we were unable to recover it.
00:33:47.321 [2024-07-26 02:09:29.276529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.321 [2024-07-26 02:09:29.276644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.321 [2024-07-26 02:09:29.276675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.321 [2024-07-26 02:09:29.276692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.321 [2024-07-26 02:09:29.276707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.321 [2024-07-26 02:09:29.276740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.321 qpair failed and we were unable to recover it.
00:33:47.321 [2024-07-26 02:09:29.286634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.321 [2024-07-26 02:09:29.286759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.321 [2024-07-26 02:09:29.286785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.321 [2024-07-26 02:09:29.286801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.321 [2024-07-26 02:09:29.286816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.321 [2024-07-26 02:09:29.286863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.321 qpair failed and we were unable to recover it.
00:33:47.321 [2024-07-26 02:09:29.296568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.321 [2024-07-26 02:09:29.296692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.321 [2024-07-26 02:09:29.296718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.321 [2024-07-26 02:09:29.296734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.321 [2024-07-26 02:09:29.296749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.321 [2024-07-26 02:09:29.296780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.321 qpair failed and we were unable to recover it.
00:33:47.321 [2024-07-26 02:09:29.306621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.321 [2024-07-26 02:09:29.306737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.321 [2024-07-26 02:09:29.306763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.321 [2024-07-26 02:09:29.306779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.321 [2024-07-26 02:09:29.306794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.321 [2024-07-26 02:09:29.306826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.321 qpair failed and we were unable to recover it.
00:33:47.321 [2024-07-26 02:09:29.316736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.321 [2024-07-26 02:09:29.316865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.321 [2024-07-26 02:09:29.316892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.322 [2024-07-26 02:09:29.316926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.322 [2024-07-26 02:09:29.316941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.322 [2024-07-26 02:09:29.316993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.322 qpair failed and we were unable to recover it.
00:33:47.322 [2024-07-26 02:09:29.326658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.322 [2024-07-26 02:09:29.326783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.322 [2024-07-26 02:09:29.326810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.322 [2024-07-26 02:09:29.326826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.322 [2024-07-26 02:09:29.326840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.322 [2024-07-26 02:09:29.326872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.322 qpair failed and we were unable to recover it.
00:33:47.583 [2024-07-26 02:09:29.336713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.583 [2024-07-26 02:09:29.336865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.583 [2024-07-26 02:09:29.336892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.583 [2024-07-26 02:09:29.336909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.583 [2024-07-26 02:09:29.336924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.583 [2024-07-26 02:09:29.336970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.583 qpair failed and we were unable to recover it.
00:33:47.583 [2024-07-26 02:09:29.346697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.583 [2024-07-26 02:09:29.346809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.583 [2024-07-26 02:09:29.346836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.583 [2024-07-26 02:09:29.346851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.583 [2024-07-26 02:09:29.346866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.583 [2024-07-26 02:09:29.346898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.583 qpair failed and we were unable to recover it.
00:33:47.583 [2024-07-26 02:09:29.356747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.583 [2024-07-26 02:09:29.356878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.583 [2024-07-26 02:09:29.356905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.583 [2024-07-26 02:09:29.356921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.583 [2024-07-26 02:09:29.356936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.583 [2024-07-26 02:09:29.356967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.583 qpair failed and we were unable to recover it.
00:33:47.583 [2024-07-26 02:09:29.366782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.583 [2024-07-26 02:09:29.366911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.583 [2024-07-26 02:09:29.366943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.583 [2024-07-26 02:09:29.366960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.583 [2024-07-26 02:09:29.366975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.583 [2024-07-26 02:09:29.367006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.583 qpair failed and we were unable to recover it.
00:33:47.583 [2024-07-26 02:09:29.376834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.583 [2024-07-26 02:09:29.376966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.583 [2024-07-26 02:09:29.376993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.583 [2024-07-26 02:09:29.377009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.583 [2024-07-26 02:09:29.377024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.583 [2024-07-26 02:09:29.377055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.583 qpair failed and we were unable to recover it.
00:33:47.583 [2024-07-26 02:09:29.386832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.583 [2024-07-26 02:09:29.386951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.583 [2024-07-26 02:09:29.386977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.583 [2024-07-26 02:09:29.386993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.583 [2024-07-26 02:09:29.387008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.583 [2024-07-26 02:09:29.387042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.583 qpair failed and we were unable to recover it.
00:33:47.583 [2024-07-26 02:09:29.396876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.583 [2024-07-26 02:09:29.397002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.583 [2024-07-26 02:09:29.397028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.583 [2024-07-26 02:09:29.397044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.583 [2024-07-26 02:09:29.397065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.583 [2024-07-26 02:09:29.397099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.583 qpair failed and we were unable to recover it.
00:33:47.583 [2024-07-26 02:09:29.406893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.583 [2024-07-26 02:09:29.407014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.583 [2024-07-26 02:09:29.407041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.583 [2024-07-26 02:09:29.407057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.583 [2024-07-26 02:09:29.407080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.583 [2024-07-26 02:09:29.407118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.583 qpair failed and we were unable to recover it.
00:33:47.583 [2024-07-26 02:09:29.416912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.583 [2024-07-26 02:09:29.417028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.583 [2024-07-26 02:09:29.417054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.583 [2024-07-26 02:09:29.417078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.583 [2024-07-26 02:09:29.417093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.583 [2024-07-26 02:09:29.417125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.583 qpair failed and we were unable to recover it.
00:33:47.583 [2024-07-26 02:09:29.427028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.583 [2024-07-26 02:09:29.427157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.583 [2024-07-26 02:09:29.427188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.583 [2024-07-26 02:09:29.427208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.583 [2024-07-26 02:09:29.427225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.583 [2024-07-26 02:09:29.427259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.583 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.436973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.437089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.437116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.437132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.437146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.437177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.584 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.447032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.447154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.447181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.447196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.447212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.447244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.584 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.457026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.457154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.457182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.457198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.457213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.457257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.584 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.467068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.467177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.467203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.467218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.467234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.467264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.584 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.477078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.477194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.477220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.477236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.477251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.477282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.584 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.487139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.487263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.487289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.487305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.487320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.487362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.584 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.497160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.497278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.497305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.497321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.497341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.497388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.584 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.507285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.507443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.507469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.507485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.507502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.507547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.584 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.517199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.517318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.517345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.517361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.517376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.517406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.584 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.527374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.527511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.527537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.527553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.527570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.527614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.584 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.537252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.537362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.537387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.537403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.537417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.537450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.584 qpair failed and we were unable to recover it.
00:33:47.584 [2024-07-26 02:09:29.547273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.584 [2024-07-26 02:09:29.547382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.584 [2024-07-26 02:09:29.547408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.584 [2024-07-26 02:09:29.547424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.584 [2024-07-26 02:09:29.547439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.584 [2024-07-26 02:09:29.547470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.585 qpair failed and we were unable to recover it.
00:33:47.585 [2024-07-26 02:09:29.557418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.585 [2024-07-26 02:09:29.557527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.585 [2024-07-26 02:09:29.557554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.585 [2024-07-26 02:09:29.557586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.585 [2024-07-26 02:09:29.557600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.585 [2024-07-26 02:09:29.557646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.585 qpair failed and we were unable to recover it.
00:33:47.585 [2024-07-26 02:09:29.567328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.585 [2024-07-26 02:09:29.567446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.585 [2024-07-26 02:09:29.567472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.585 [2024-07-26 02:09:29.567488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.585 [2024-07-26 02:09:29.567502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.585 [2024-07-26 02:09:29.567533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.585 qpair failed and we were unable to recover it. 
00:33:47.585 [2024-07-26 02:09:29.577439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.585 [2024-07-26 02:09:29.577558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.585 [2024-07-26 02:09:29.577584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.585 [2024-07-26 02:09:29.577599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.585 [2024-07-26 02:09:29.577615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.585 [2024-07-26 02:09:29.577645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.585 qpair failed and we were unable to recover it. 
00:33:47.585 [2024-07-26 02:09:29.587384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.585 [2024-07-26 02:09:29.587494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.585 [2024-07-26 02:09:29.587521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.585 [2024-07-26 02:09:29.587548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.585 [2024-07-26 02:09:29.587563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.585 [2024-07-26 02:09:29.587594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.585 qpair failed and we were unable to recover it. 
00:33:47.845 [2024-07-26 02:09:29.597393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.845 [2024-07-26 02:09:29.597502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.845 [2024-07-26 02:09:29.597528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.845 [2024-07-26 02:09:29.597544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.845 [2024-07-26 02:09:29.597559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.845 [2024-07-26 02:09:29.597591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.845 qpair failed and we were unable to recover it. 
00:33:47.845 [2024-07-26 02:09:29.607432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.845 [2024-07-26 02:09:29.607548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.845 [2024-07-26 02:09:29.607575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.845 [2024-07-26 02:09:29.607591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.845 [2024-07-26 02:09:29.607605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.845 [2024-07-26 02:09:29.607637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.845 qpair failed and we were unable to recover it. 
00:33:47.845 [2024-07-26 02:09:29.617466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.845 [2024-07-26 02:09:29.617576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.845 [2024-07-26 02:09:29.617604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.845 [2024-07-26 02:09:29.617620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.617635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.617667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.627507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.627665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.627692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.627709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.627739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.627770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.637500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.637607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.637634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.637651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.637667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.637699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.647578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.647700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.647728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.647745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.647759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.647805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.657568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.657679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.657705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.657721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.657735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.657766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.667584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.667689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.667714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.667730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.667744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.667775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.677675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.677805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.677838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.677855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.677870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.677901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.687707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.687843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.687870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.687886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.687901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.687949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.697688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.697798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.697827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.697843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.697857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.697889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.707700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.707809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.707836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.707852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.707866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.707899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.717723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.717834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.717861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.717878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.717893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.717931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.727799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.727914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.727941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.727958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.846 [2024-07-26 02:09:29.727973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.846 [2024-07-26 02:09:29.728018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.846 qpair failed and we were unable to recover it. 
00:33:47.846 [2024-07-26 02:09:29.737800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.846 [2024-07-26 02:09:29.737910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.846 [2024-07-26 02:09:29.737937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.846 [2024-07-26 02:09:29.737953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.847 [2024-07-26 02:09:29.737967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.847 [2024-07-26 02:09:29.737998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-26 02:09:29.747923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.847 [2024-07-26 02:09:29.748038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.847 [2024-07-26 02:09:29.748070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.847 [2024-07-26 02:09:29.748087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.847 [2024-07-26 02:09:29.748101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.847 [2024-07-26 02:09:29.748132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-26 02:09:29.757851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.847 [2024-07-26 02:09:29.757954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.847 [2024-07-26 02:09:29.757980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.847 [2024-07-26 02:09:29.757995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.847 [2024-07-26 02:09:29.758010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.847 [2024-07-26 02:09:29.758041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-26 02:09:29.767902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.847 [2024-07-26 02:09:29.768019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.847 [2024-07-26 02:09:29.768051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.847 [2024-07-26 02:09:29.768083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.847 [2024-07-26 02:09:29.768108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.847 [2024-07-26 02:09:29.768140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-26 02:09:29.777901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.847 [2024-07-26 02:09:29.778037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.847 [2024-07-26 02:09:29.778073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.847 [2024-07-26 02:09:29.778103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.847 [2024-07-26 02:09:29.778117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.847 [2024-07-26 02:09:29.778148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-26 02:09:29.787917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.847 [2024-07-26 02:09:29.788025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.847 [2024-07-26 02:09:29.788050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.847 [2024-07-26 02:09:29.788073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.847 [2024-07-26 02:09:29.788089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.847 [2024-07-26 02:09:29.788121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-26 02:09:29.797989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.847 [2024-07-26 02:09:29.798100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.847 [2024-07-26 02:09:29.798125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.847 [2024-07-26 02:09:29.798140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.847 [2024-07-26 02:09:29.798155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.847 [2024-07-26 02:09:29.798189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-26 02:09:29.808015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.847 [2024-07-26 02:09:29.808143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.847 [2024-07-26 02:09:29.808169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.847 [2024-07-26 02:09:29.808186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.847 [2024-07-26 02:09:29.808200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.847 [2024-07-26 02:09:29.808237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-26 02:09:29.818018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.847 [2024-07-26 02:09:29.818140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.847 [2024-07-26 02:09:29.818165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.847 [2024-07-26 02:09:29.818182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.847 [2024-07-26 02:09:29.818197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.847 [2024-07-26 02:09:29.818229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-26 02:09:29.828123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.847 [2024-07-26 02:09:29.828240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.847 [2024-07-26 02:09:29.828266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.847 [2024-07-26 02:09:29.828282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.847 [2024-07-26 02:09:29.828296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.847 [2024-07-26 02:09:29.828335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-26 02:09:29.838087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.847 [2024-07-26 02:09:29.838209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.847 [2024-07-26 02:09:29.838240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.847 [2024-07-26 02:09:29.838257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.847 [2024-07-26 02:09:29.838272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:47.847 [2024-07-26 02:09:29.838306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-26 02:09:29.848124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.847 [2024-07-26 02:09:29.848236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.847 [2024-07-26 02:09:29.848263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.847 [2024-07-26 02:09:29.848279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.847 [2024-07-26 02:09:29.848293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:47.847 [2024-07-26 02:09:29.848325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.847 qpair failed and we were unable to recover it.
00:33:48.108 [2024-07-26 02:09:29.858147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.108 [2024-07-26 02:09:29.858264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.858297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.858314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.858328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.858372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.868181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.109 [2024-07-26 02:09:29.868293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.868321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.868337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.868351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.868383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.878189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.109 [2024-07-26 02:09:29.878303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.878329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.878345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.878359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.878392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.888240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.109 [2024-07-26 02:09:29.888375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.888401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.888419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.888434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.888467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.898275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.109 [2024-07-26 02:09:29.898387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.898412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.898428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.898447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.898480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.908281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.109 [2024-07-26 02:09:29.908429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.908456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.908471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.908485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.908528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.918314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.109 [2024-07-26 02:09:29.918439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.918466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.918481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.918495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.918527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.928361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.109 [2024-07-26 02:09:29.928524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.928551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.928566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.928579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.928609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.938349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.109 [2024-07-26 02:09:29.938471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.938499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.938514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.938528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.938560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.948409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.109 [2024-07-26 02:09:29.948529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.948556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.948572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.948585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.948616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.958463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.109 [2024-07-26 02:09:29.958575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.958602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.958618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.958631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.958671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.968436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.109 [2024-07-26 02:09:29.968546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.109 [2024-07-26 02:09:29.968573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.109 [2024-07-26 02:09:29.968589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.109 [2024-07-26 02:09:29.968603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.109 [2024-07-26 02:09:29.968635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.109 qpair failed and we were unable to recover it.
00:33:48.109 [2024-07-26 02:09:29.978528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:29.978641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:29.978666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:29.978681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:29.978694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:29.978725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:29.988573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:29.988717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:29.988744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:29.988765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:29.988781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:29.988826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:29.998533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:29.998642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:29.998668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:29.998685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:29.998699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:29.998742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:30.008595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:30.008765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:30.008793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:30.008810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:30.008825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:30.008862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:30.018640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:30.018764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:30.018796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:30.018813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:30.018828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:30.018874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:30.028660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:30.028774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:30.028803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:30.028819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:30.028834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:30.028866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:30.038669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:30.038793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:30.038821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:30.038837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:30.038851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:30.038882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:30.048709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:30.048818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:30.048845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:30.048860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:30.048874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:30.048905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:30.058704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:30.058828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:30.058854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:30.058870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:30.058885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:30.058916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:30.068726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:30.068828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:30.068853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:30.068869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:30.068883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:30.068924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:30.078742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:30.078849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:30.078875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:30.078898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:30.078914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:30.078946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:30.088791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.110 [2024-07-26 02:09:30.088910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.110 [2024-07-26 02:09:30.088936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.110 [2024-07-26 02:09:30.088952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.110 [2024-07-26 02:09:30.088966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.110 [2024-07-26 02:09:30.088998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.110 qpair failed and we were unable to recover it.
00:33:48.110 [2024-07-26 02:09:30.098823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.111 [2024-07-26 02:09:30.098964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.111 [2024-07-26 02:09:30.098992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.111 [2024-07-26 02:09:30.099009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.111 [2024-07-26 02:09:30.099023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.111 [2024-07-26 02:09:30.099054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.111 qpair failed and we were unable to recover it.
00:33:48.111 [2024-07-26 02:09:30.108827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.111 [2024-07-26 02:09:30.108932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.111 [2024-07-26 02:09:30.108958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.111 [2024-07-26 02:09:30.108974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.111 [2024-07-26 02:09:30.108988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.111 [2024-07-26 02:09:30.109019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.111 qpair failed and we were unable to recover it.
00:33:48.372 [2024-07-26 02:09:30.118857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.372 [2024-07-26 02:09:30.118964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.372 [2024-07-26 02:09:30.118990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.372 [2024-07-26 02:09:30.119006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.372 [2024-07-26 02:09:30.119020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.372 [2024-07-26 02:09:30.119053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.372 qpair failed and we were unable to recover it.
00:33:48.372 [2024-07-26 02:09:30.128906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.372 [2024-07-26 02:09:30.129026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.372 [2024-07-26 02:09:30.129052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.372 [2024-07-26 02:09:30.129074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.372 [2024-07-26 02:09:30.129090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.372 [2024-07-26 02:09:30.129134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.372 qpair failed and we were unable to recover it.
00:33:48.372 [2024-07-26 02:09:30.138924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.372 [2024-07-26 02:09:30.139030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.372 [2024-07-26 02:09:30.139056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.372 [2024-07-26 02:09:30.139082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.372 [2024-07-26 02:09:30.139097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.372 [2024-07-26 02:09:30.139127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.372 qpair failed and we were unable to recover it.
00:33:48.372 [2024-07-26 02:09:30.148942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.372 [2024-07-26 02:09:30.149053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.372 [2024-07-26 02:09:30.149085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.373 [2024-07-26 02:09:30.149101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.373 [2024-07-26 02:09:30.149116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.373 [2024-07-26 02:09:30.149147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.373 qpair failed and we were unable to recover it.
00:33:48.373 [2024-07-26 02:09:30.159070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.373 [2024-07-26 02:09:30.159179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.373 [2024-07-26 02:09:30.159205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.373 [2024-07-26 02:09:30.159221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.373 [2024-07-26 02:09:30.159236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.373 [2024-07-26 02:09:30.159266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.373 qpair failed and we were unable to recover it.
00:33:48.373 [2024-07-26 02:09:30.169030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.373 [2024-07-26 02:09:30.169150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.373 [2024-07-26 02:09:30.169180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.373 [2024-07-26 02:09:30.169197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.373 [2024-07-26 02:09:30.169211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.373 [2024-07-26 02:09:30.169244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.373 qpair failed and we were unable to recover it.
00:33:48.373 [2024-07-26 02:09:30.179028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.373 [2024-07-26 02:09:30.179143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.373 [2024-07-26 02:09:30.179169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.373 [2024-07-26 02:09:30.179186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.373 [2024-07-26 02:09:30.179201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.373 [2024-07-26 02:09:30.179233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.373 qpair failed and we were unable to recover it.
00:33:48.373 [2024-07-26 02:09:30.189093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.373 [2024-07-26 02:09:30.189213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.373 [2024-07-26 02:09:30.189241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.373 [2024-07-26 02:09:30.189257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.373 [2024-07-26 02:09:30.189271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.373 [2024-07-26 02:09:30.189303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.373 qpair failed and we were unable to recover it.
00:33:48.373 [2024-07-26 02:09:30.199102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.373 [2024-07-26 02:09:30.199216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.373 [2024-07-26 02:09:30.199241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.373 [2024-07-26 02:09:30.199258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.373 [2024-07-26 02:09:30.199272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.373 [2024-07-26 02:09:30.199304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.373 qpair failed and we were unable to recover it.
00:33:48.373 [2024-07-26 02:09:30.209123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.373 [2024-07-26 02:09:30.209236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.373 [2024-07-26 02:09:30.209261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.373 [2024-07-26 02:09:30.209277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.373 [2024-07-26 02:09:30.209291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.373 [2024-07-26 02:09:30.209329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.373 qpair failed and we were unable to recover it. 
00:33:48.373 [2024-07-26 02:09:30.219202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.373 [2024-07-26 02:09:30.219364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.373 [2024-07-26 02:09:30.219390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.373 [2024-07-26 02:09:30.219407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.373 [2024-07-26 02:09:30.219421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.373 [2024-07-26 02:09:30.219452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.373 qpair failed and we were unable to recover it. 
00:33:48.373 [2024-07-26 02:09:30.229259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.373 [2024-07-26 02:09:30.229364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.373 [2024-07-26 02:09:30.229390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.373 [2024-07-26 02:09:30.229406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.373 [2024-07-26 02:09:30.229420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.373 [2024-07-26 02:09:30.229451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.373 qpair failed and we were unable to recover it. 
00:33:48.373 [2024-07-26 02:09:30.239191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.373 [2024-07-26 02:09:30.239296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.373 [2024-07-26 02:09:30.239321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.373 [2024-07-26 02:09:30.239337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.373 [2024-07-26 02:09:30.239352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.373 [2024-07-26 02:09:30.239383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.373 qpair failed and we were unable to recover it. 
00:33:48.373 [2024-07-26 02:09:30.249273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.373 [2024-07-26 02:09:30.249420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.373 [2024-07-26 02:09:30.249448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.373 [2024-07-26 02:09:30.249465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.373 [2024-07-26 02:09:30.249479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.373 [2024-07-26 02:09:30.249524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.373 qpair failed and we were unable to recover it. 
00:33:48.373 [2024-07-26 02:09:30.259270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.373 [2024-07-26 02:09:30.259393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.373 [2024-07-26 02:09:30.259426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.373 [2024-07-26 02:09:30.259443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.373 [2024-07-26 02:09:30.259458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.373 [2024-07-26 02:09:30.259489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.373 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.269290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.269421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.269449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.269465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.269479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.374 [2024-07-26 02:09:30.269511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.374 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.279392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.279502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.279528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.279544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.279573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.374 [2024-07-26 02:09:30.279604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.374 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.289405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.289529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.289558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.289578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.289593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.374 [2024-07-26 02:09:30.289639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.374 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.299449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.299599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.299628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.299660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.299679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.374 [2024-07-26 02:09:30.299724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.374 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.309435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.309541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.309566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.309582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.309596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.374 [2024-07-26 02:09:30.309641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.374 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.319436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.319541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.319567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.319582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.319597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.374 [2024-07-26 02:09:30.319629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.374 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.329573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.329689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.329716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.329732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.329747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.374 [2024-07-26 02:09:30.329779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.374 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.339557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.339675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.339700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.339716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.339730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.374 [2024-07-26 02:09:30.339762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.374 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.349533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.349650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.349677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.349692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.349707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.374 [2024-07-26 02:09:30.349737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.374 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.359625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.359731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.359757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.359772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.359787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.374 [2024-07-26 02:09:30.359830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.374 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.369597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.369729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.369757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.369773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.369787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.374 [2024-07-26 02:09:30.369818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.374 qpair failed and we were unable to recover it. 
00:33:48.374 [2024-07-26 02:09:30.379658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.374 [2024-07-26 02:09:30.379774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.374 [2024-07-26 02:09:30.379802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.374 [2024-07-26 02:09:30.379818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.374 [2024-07-26 02:09:30.379832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.375 [2024-07-26 02:09:30.379863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.375 qpair failed and we were unable to recover it. 
00:33:48.635 [2024-07-26 02:09:30.389654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.635 [2024-07-26 02:09:30.389761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.635 [2024-07-26 02:09:30.389787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.635 [2024-07-26 02:09:30.389814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.635 [2024-07-26 02:09:30.389829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.635 [2024-07-26 02:09:30.389861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.635 qpair failed and we were unable to recover it. 
00:33:48.635 [2024-07-26 02:09:30.399708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.635 [2024-07-26 02:09:30.399825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.635 [2024-07-26 02:09:30.399851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.635 [2024-07-26 02:09:30.399867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.635 [2024-07-26 02:09:30.399881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.635 [2024-07-26 02:09:30.399913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.635 qpair failed and we were unable to recover it. 
00:33:48.635 [2024-07-26 02:09:30.409730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.635 [2024-07-26 02:09:30.409894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.635 [2024-07-26 02:09:30.409924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.635 [2024-07-26 02:09:30.409942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.635 [2024-07-26 02:09:30.409974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.635 [2024-07-26 02:09:30.410007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.635 qpair failed and we were unable to recover it. 
00:33:48.635 [2024-07-26 02:09:30.419757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.635 [2024-07-26 02:09:30.419871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.635 [2024-07-26 02:09:30.419897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.635 [2024-07-26 02:09:30.419912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.635 [2024-07-26 02:09:30.419927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.635 [2024-07-26 02:09:30.419958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.635 qpair failed and we were unable to recover it. 
00:33:48.635 [2024-07-26 02:09:30.429778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.635 [2024-07-26 02:09:30.429886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.635 [2024-07-26 02:09:30.429913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.635 [2024-07-26 02:09:30.429929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.636 [2024-07-26 02:09:30.429944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.636 [2024-07-26 02:09:30.429976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.636 qpair failed and we were unable to recover it. 
00:33:48.636 [2024-07-26 02:09:30.439806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.636 [2024-07-26 02:09:30.439928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.636 [2024-07-26 02:09:30.439956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.636 [2024-07-26 02:09:30.439972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.636 [2024-07-26 02:09:30.439986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.636 [2024-07-26 02:09:30.440045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.636 qpair failed and we were unable to recover it. 
00:33:48.636 [2024-07-26 02:09:30.449865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.636 [2024-07-26 02:09:30.449984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.636 [2024-07-26 02:09:30.450009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.636 [2024-07-26 02:09:30.450025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.636 [2024-07-26 02:09:30.450042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.636 [2024-07-26 02:09:30.450083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.636 qpair failed and we were unable to recover it. 
00:33:48.636 [2024-07-26 02:09:30.459860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.636 [2024-07-26 02:09:30.459970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.636 [2024-07-26 02:09:30.459995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.636 [2024-07-26 02:09:30.460011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.636 [2024-07-26 02:09:30.460026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.636 [2024-07-26 02:09:30.460057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.636 qpair failed and we were unable to recover it. 
00:33:48.636 [2024-07-26 02:09:30.469975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.636 [2024-07-26 02:09:30.470093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.636 [2024-07-26 02:09:30.470119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.636 [2024-07-26 02:09:30.470134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.636 [2024-07-26 02:09:30.470149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.636 [2024-07-26 02:09:30.470180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.636 qpair failed and we were unable to recover it. 
00:33:48.636 [2024-07-26 02:09:30.479917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.636 [2024-07-26 02:09:30.480030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.636 [2024-07-26 02:09:30.480072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.636 [2024-07-26 02:09:30.480095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.636 [2024-07-26 02:09:30.480111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.636 [2024-07-26 02:09:30.480142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.636 qpair failed and we were unable to recover it. 
00:33:48.636 [2024-07-26 02:09:30.489971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.636 [2024-07-26 02:09:30.490100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.636 [2024-07-26 02:09:30.490126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.636 [2024-07-26 02:09:30.490142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.636 [2024-07-26 02:09:30.490156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.636 [2024-07-26 02:09:30.490187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.636 qpair failed and we were unable to recover it.
00:33:48.636 [2024-07-26 02:09:30.500071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.636 [2024-07-26 02:09:30.500201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.636 [2024-07-26 02:09:30.500229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.636 [2024-07-26 02:09:30.500248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.636 [2024-07-26 02:09:30.500263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.636 [2024-07-26 02:09:30.500294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.636 qpair failed and we were unable to recover it.
00:33:48.636 [2024-07-26 02:09:30.509974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.636 [2024-07-26 02:09:30.510088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.636 [2024-07-26 02:09:30.510114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.636 [2024-07-26 02:09:30.510130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.636 [2024-07-26 02:09:30.510144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.636 [2024-07-26 02:09:30.510175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.636 qpair failed and we were unable to recover it.
00:33:48.636 [2024-07-26 02:09:30.520005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.636 [2024-07-26 02:09:30.520119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.636 [2024-07-26 02:09:30.520144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.636 [2024-07-26 02:09:30.520160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.636 [2024-07-26 02:09:30.520174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.636 [2024-07-26 02:09:30.520206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.636 qpair failed and we were unable to recover it.
00:33:48.636 [2024-07-26 02:09:30.530075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.636 [2024-07-26 02:09:30.530232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.636 [2024-07-26 02:09:30.530260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.636 [2024-07-26 02:09:30.530276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.636 [2024-07-26 02:09:30.530291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.636 [2024-07-26 02:09:30.530322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.636 qpair failed and we were unable to recover it.
00:33:48.636 [2024-07-26 02:09:30.540082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.636 [2024-07-26 02:09:30.540192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.636 [2024-07-26 02:09:30.540218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.636 [2024-07-26 02:09:30.540233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.636 [2024-07-26 02:09:30.540248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.636 [2024-07-26 02:09:30.540279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.636 qpair failed and we were unable to recover it.
00:33:48.636 [2024-07-26 02:09:30.550112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.636 [2024-07-26 02:09:30.550225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.636 [2024-07-26 02:09:30.550251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.637 [2024-07-26 02:09:30.550267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.637 [2024-07-26 02:09:30.550280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.637 [2024-07-26 02:09:30.550312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.637 qpair failed and we were unable to recover it.
00:33:48.637 [2024-07-26 02:09:30.560150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.637 [2024-07-26 02:09:30.560277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.637 [2024-07-26 02:09:30.560304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.637 [2024-07-26 02:09:30.560321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.637 [2024-07-26 02:09:30.560340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.637 [2024-07-26 02:09:30.560373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.637 qpair failed and we were unable to recover it.
00:33:48.637 [2024-07-26 02:09:30.570258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.637 [2024-07-26 02:09:30.570416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.637 [2024-07-26 02:09:30.570450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.637 [2024-07-26 02:09:30.570468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.637 [2024-07-26 02:09:30.570482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.637 [2024-07-26 02:09:30.570514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.637 qpair failed and we were unable to recover it.
00:33:48.637 [2024-07-26 02:09:30.580177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.637 [2024-07-26 02:09:30.580288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.637 [2024-07-26 02:09:30.580314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.637 [2024-07-26 02:09:30.580330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.637 [2024-07-26 02:09:30.580344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.637 [2024-07-26 02:09:30.580375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.637 qpair failed and we were unable to recover it.
00:33:48.637 [2024-07-26 02:09:30.590240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.637 [2024-07-26 02:09:30.590355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.637 [2024-07-26 02:09:30.590383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.637 [2024-07-26 02:09:30.590401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.637 [2024-07-26 02:09:30.590419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.637 [2024-07-26 02:09:30.590454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.637 qpair failed and we were unable to recover it.
00:33:48.637 [2024-07-26 02:09:30.600250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.637 [2024-07-26 02:09:30.600362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.637 [2024-07-26 02:09:30.600389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.637 [2024-07-26 02:09:30.600405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.637 [2024-07-26 02:09:30.600419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.637 [2024-07-26 02:09:30.600451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.637 qpair failed and we were unable to recover it.
00:33:48.637 [2024-07-26 02:09:30.610341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.637 [2024-07-26 02:09:30.610454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.637 [2024-07-26 02:09:30.610482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.637 [2024-07-26 02:09:30.610498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.637 [2024-07-26 02:09:30.610513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.637 [2024-07-26 02:09:30.610551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.637 qpair failed and we were unable to recover it.
00:33:48.637 [2024-07-26 02:09:30.620311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.637 [2024-07-26 02:09:30.620425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.637 [2024-07-26 02:09:30.620452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.637 [2024-07-26 02:09:30.620469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.637 [2024-07-26 02:09:30.620483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.637 [2024-07-26 02:09:30.620516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.637 qpair failed and we were unable to recover it.
00:33:48.637 [2024-07-26 02:09:30.630480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.637 [2024-07-26 02:09:30.630597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.637 [2024-07-26 02:09:30.630639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.637 [2024-07-26 02:09:30.630655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.637 [2024-07-26 02:09:30.630670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.637 [2024-07-26 02:09:30.630716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.637 qpair failed and we were unable to recover it.
00:33:48.637 [2024-07-26 02:09:30.640380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.637 [2024-07-26 02:09:30.640540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.637 [2024-07-26 02:09:30.640567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.637 [2024-07-26 02:09:30.640583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.637 [2024-07-26 02:09:30.640599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.637 [2024-07-26 02:09:30.640630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.637 qpair failed and we were unable to recover it.
00:33:48.898 [2024-07-26 02:09:30.650395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.898 [2024-07-26 02:09:30.650521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.898 [2024-07-26 02:09:30.650549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.898 [2024-07-26 02:09:30.650565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.898 [2024-07-26 02:09:30.650581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.898 [2024-07-26 02:09:30.650627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.898 qpair failed and we were unable to recover it.
00:33:48.898 [2024-07-26 02:09:30.660537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.898 [2024-07-26 02:09:30.660652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.898 [2024-07-26 02:09:30.660684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.898 [2024-07-26 02:09:30.660700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.898 [2024-07-26 02:09:30.660716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.898 [2024-07-26 02:09:30.660747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.898 qpair failed and we were unable to recover it.
00:33:48.898 [2024-07-26 02:09:30.670474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.898 [2024-07-26 02:09:30.670588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.898 [2024-07-26 02:09:30.670618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.898 [2024-07-26 02:09:30.670635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.898 [2024-07-26 02:09:30.670653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.898 [2024-07-26 02:09:30.670686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.898 qpair failed and we were unable to recover it.
00:33:48.898 [2024-07-26 02:09:30.680523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.898 [2024-07-26 02:09:30.680675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.898 [2024-07-26 02:09:30.680703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.898 [2024-07-26 02:09:30.680719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.898 [2024-07-26 02:09:30.680735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.898 [2024-07-26 02:09:30.680781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.898 qpair failed and we were unable to recover it.
00:33:48.898 [2024-07-26 02:09:30.690531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.898 [2024-07-26 02:09:30.690651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.898 [2024-07-26 02:09:30.690678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.898 [2024-07-26 02:09:30.690694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.898 [2024-07-26 02:09:30.690709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.898 [2024-07-26 02:09:30.690740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.898 qpair failed and we were unable to recover it.
00:33:48.898 [2024-07-26 02:09:30.700523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.898 [2024-07-26 02:09:30.700638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.898 [2024-07-26 02:09:30.700664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.898 [2024-07-26 02:09:30.700680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.898 [2024-07-26 02:09:30.700700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.898 [2024-07-26 02:09:30.700732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.898 qpair failed and we were unable to recover it.
00:33:48.898 [2024-07-26 02:09:30.710619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.898 [2024-07-26 02:09:30.710729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.898 [2024-07-26 02:09:30.710756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.898 [2024-07-26 02:09:30.710772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.898 [2024-07-26 02:09:30.710787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.898 [2024-07-26 02:09:30.710817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.898 qpair failed and we were unable to recover it.
00:33:48.898 [2024-07-26 02:09:30.720605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.898 [2024-07-26 02:09:30.720727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.898 [2024-07-26 02:09:30.720753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.898 [2024-07-26 02:09:30.720769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.898 [2024-07-26 02:09:30.720785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.898 [2024-07-26 02:09:30.720816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.898 qpair failed and we were unable to recover it.
00:33:48.898 [2024-07-26 02:09:30.730649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.898 [2024-07-26 02:09:30.730761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.898 [2024-07-26 02:09:30.730788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.898 [2024-07-26 02:09:30.730804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.898 [2024-07-26 02:09:30.730819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.898 [2024-07-26 02:09:30.730851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.898 qpair failed and we were unable to recover it.
00:33:48.898 [2024-07-26 02:09:30.740751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.898 [2024-07-26 02:09:30.740861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.898 [2024-07-26 02:09:30.740888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.898 [2024-07-26 02:09:30.740903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.898 [2024-07-26 02:09:30.740919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.898 [2024-07-26 02:09:30.740949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.898 qpair failed and we were unable to recover it.
00:33:48.898 [2024-07-26 02:09:30.750682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.899 [2024-07-26 02:09:30.750802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.899 [2024-07-26 02:09:30.750830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.899 [2024-07-26 02:09:30.750847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.899 [2024-07-26 02:09:30.750865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.899 [2024-07-26 02:09:30.750897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.899 qpair failed and we were unable to recover it.
00:33:48.899 [2024-07-26 02:09:30.760678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.899 [2024-07-26 02:09:30.760836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.899 [2024-07-26 02:09:30.760863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.899 [2024-07-26 02:09:30.760879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.899 [2024-07-26 02:09:30.760895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.899 [2024-07-26 02:09:30.760926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.899 qpair failed and we were unable to recover it.
00:33:48.899 [2024-07-26 02:09:30.770820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.899 [2024-07-26 02:09:30.770938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.899 [2024-07-26 02:09:30.770965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.899 [2024-07-26 02:09:30.770982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.899 [2024-07-26 02:09:30.770997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.899 [2024-07-26 02:09:30.771028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.899 qpair failed and we were unable to recover it.
00:33:48.899 [2024-07-26 02:09:30.780769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.899 [2024-07-26 02:09:30.780877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.899 [2024-07-26 02:09:30.780904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.899 [2024-07-26 02:09:30.780920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.899 [2024-07-26 02:09:30.780935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.899 [2024-07-26 02:09:30.780966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.899 qpair failed and we were unable to recover it.
00:33:48.899 [2024-07-26 02:09:30.790851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.899 [2024-07-26 02:09:30.790961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.899 [2024-07-26 02:09:30.790988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.899 [2024-07-26 02:09:30.791003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.899 [2024-07-26 02:09:30.791025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.899 [2024-07-26 02:09:30.791064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.899 qpair failed and we were unable to recover it.
00:33:48.899 [2024-07-26 02:09:30.800846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.899 [2024-07-26 02:09:30.800966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.899 [2024-07-26 02:09:30.800993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.899 [2024-07-26 02:09:30.801009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.899 [2024-07-26 02:09:30.801024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.899 [2024-07-26 02:09:30.801055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.899 qpair failed and we were unable to recover it.
00:33:48.899 [2024-07-26 02:09:30.810869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.899 [2024-07-26 02:09:30.811035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.899 [2024-07-26 02:09:30.811068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.899 [2024-07-26 02:09:30.811085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.899 [2024-07-26 02:09:30.811101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.899 [2024-07-26 02:09:30.811133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.899 qpair failed and we were unable to recover it.
00:33:48.899 [2024-07-26 02:09:30.820867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.899 [2024-07-26 02:09:30.820978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.899 [2024-07-26 02:09:30.821005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.899 [2024-07-26 02:09:30.821021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.899 [2024-07-26 02:09:30.821036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.899 [2024-07-26 02:09:30.821082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.899 qpair failed and we were unable to recover it.
00:33:48.899 [2024-07-26 02:09:30.830883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.899 [2024-07-26 02:09:30.830989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.899 [2024-07-26 02:09:30.831016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.899 [2024-07-26 02:09:30.831031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.899 [2024-07-26 02:09:30.831064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.899 [2024-07-26 02:09:30.831099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.899 qpair failed and we were unable to recover it.
00:33:48.899 [2024-07-26 02:09:30.840930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.899 [2024-07-26 02:09:30.841051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.899 [2024-07-26 02:09:30.841084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.899 [2024-07-26 02:09:30.841101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.899 [2024-07-26 02:09:30.841116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.899 [2024-07-26 02:09:30.841147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.899 qpair failed and we were unable to recover it.
00:33:48.899 [2024-07-26 02:09:30.850999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.899 [2024-07-26 02:09:30.851167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.899 [2024-07-26 02:09:30.851194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.899 [2024-07-26 02:09:30.851211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.899 [2024-07-26 02:09:30.851225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90 00:33:48.899 [2024-07-26 02:09:30.851257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.899 qpair failed and we were unable to recover it. 
00:33:48.899 [2024-07-26 02:09:30.861055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.899 [2024-07-26 02:09:30.861198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.899 [2024-07-26 02:09:30.861226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.899 [2024-07-26 02:09:30.861241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.899 [2024-07-26 02:09:30.861257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.899 [2024-07-26 02:09:30.861300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.899 qpair failed and we were unable to recover it.
00:33:48.900 [2024-07-26 02:09:30.871001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.900 [2024-07-26 02:09:30.871113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.900 [2024-07-26 02:09:30.871139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.900 [2024-07-26 02:09:30.871154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.900 [2024-07-26 02:09:30.871167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.900 [2024-07-26 02:09:30.871198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.900 qpair failed and we were unable to recover it.
00:33:48.900 [2024-07-26 02:09:30.881049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.900 [2024-07-26 02:09:30.881177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.900 [2024-07-26 02:09:30.881203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.900 [2024-07-26 02:09:30.881225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.900 [2024-07-26 02:09:30.881241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.900 [2024-07-26 02:09:30.881273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.900 qpair failed and we were unable to recover it.
00:33:48.900 [2024-07-26 02:09:30.891100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.900 [2024-07-26 02:09:30.891238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.900 [2024-07-26 02:09:30.891265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.900 [2024-07-26 02:09:30.891286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.900 [2024-07-26 02:09:30.891303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.900 [2024-07-26 02:09:30.891350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.900 qpair failed and we were unable to recover it.
00:33:48.900 [2024-07-26 02:09:30.901195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.900 [2024-07-26 02:09:30.901320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.900 [2024-07-26 02:09:30.901355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.900 [2024-07-26 02:09:30.901372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.900 [2024-07-26 02:09:30.901386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:48.900 [2024-07-26 02:09:30.901419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.900 qpair failed and we were unable to recover it.
00:33:49.159 [2024-07-26 02:09:30.911127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.159 [2024-07-26 02:09:30.911232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.159 [2024-07-26 02:09:30.911259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.159 [2024-07-26 02:09:30.911274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.159 [2024-07-26 02:09:30.911289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.159 [2024-07-26 02:09:30.911322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.159 qpair failed and we were unable to recover it.
00:33:49.159 [2024-07-26 02:09:30.921167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.159 [2024-07-26 02:09:30.921278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.159 [2024-07-26 02:09:30.921304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.159 [2024-07-26 02:09:30.921320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.159 [2024-07-26 02:09:30.921345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.159 [2024-07-26 02:09:30.921376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.159 qpair failed and we were unable to recover it.
00:33:49.159 [2024-07-26 02:09:30.931212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.159 [2024-07-26 02:09:30.931336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.159 [2024-07-26 02:09:30.931362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.159 [2024-07-26 02:09:30.931378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.159 [2024-07-26 02:09:30.931392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.159 [2024-07-26 02:09:30.931422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.159 qpair failed and we were unable to recover it.
00:33:49.159 [2024-07-26 02:09:30.941216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.159 [2024-07-26 02:09:30.941355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.159 [2024-07-26 02:09:30.941381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.159 [2024-07-26 02:09:30.941397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.159 [2024-07-26 02:09:30.941412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.159 [2024-07-26 02:09:30.941442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.159 qpair failed and we were unable to recover it.
00:33:49.159 [2024-07-26 02:09:30.951283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:30.951415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:30.951442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:30.951457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:30.951472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:30.951502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:30.961302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:30.961435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:30.961461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:30.961478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:30.961492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:30.961524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:30.971333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:30.971452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:30.971483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:30.971500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:30.971515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:30.971548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:30.981383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:30.981506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:30.981530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:30.981545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:30.981559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:30.981589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:30.991359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:30.991488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:30.991514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:30.991530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:30.991545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:30.991577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:31.001436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:31.001545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:31.001573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:31.001589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:31.001608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:31.001640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:31.011500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:31.011642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:31.011669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:31.011685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:31.011700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:31.011738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:31.021523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:31.021640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:31.021667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:31.021683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:31.021697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:31.021730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:31.031550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:31.031661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:31.031687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:31.031703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:31.031718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:31.031750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:31.041620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:31.041734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:31.041776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:31.041796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:31.041810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:31.041857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:31.051539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:31.051697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:31.051725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:31.051741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:31.051768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:31.051801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:31.061621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.160 [2024-07-26 02:09:31.061738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.160 [2024-07-26 02:09:31.061771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.160 [2024-07-26 02:09:31.061788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.160 [2024-07-26 02:09:31.061803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.160 [2024-07-26 02:09:31.061836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.160 qpair failed and we were unable to recover it.
00:33:49.160 [2024-07-26 02:09:31.071613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.161 [2024-07-26 02:09:31.071729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.161 [2024-07-26 02:09:31.071756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.161 [2024-07-26 02:09:31.071772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.161 [2024-07-26 02:09:31.071796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.161 [2024-07-26 02:09:31.071828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.161 qpair failed and we were unable to recover it.
00:33:49.161 [2024-07-26 02:09:31.081657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.161 [2024-07-26 02:09:31.081779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.161 [2024-07-26 02:09:31.081806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.161 [2024-07-26 02:09:31.081822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.161 [2024-07-26 02:09:31.081836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.161 [2024-07-26 02:09:31.081867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.161 qpair failed and we were unable to recover it.
00:33:49.161 [2024-07-26 02:09:31.091683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.161 [2024-07-26 02:09:31.091797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.161 [2024-07-26 02:09:31.091823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.161 [2024-07-26 02:09:31.091838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.161 [2024-07-26 02:09:31.091853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.161 [2024-07-26 02:09:31.091885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.161 qpair failed and we were unable to recover it.
00:33:49.161 [2024-07-26 02:09:31.101704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.161 [2024-07-26 02:09:31.101827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.161 [2024-07-26 02:09:31.101853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.161 [2024-07-26 02:09:31.101869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.161 [2024-07-26 02:09:31.101883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.161 [2024-07-26 02:09:31.101923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.161 qpair failed and we were unable to recover it.
00:33:49.161 [2024-07-26 02:09:31.111741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.161 [2024-07-26 02:09:31.111859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.161 [2024-07-26 02:09:31.111886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.161 [2024-07-26 02:09:31.111901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.161 [2024-07-26 02:09:31.111916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.161 [2024-07-26 02:09:31.111948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.161 qpair failed and we were unable to recover it.
00:33:49.161 [2024-07-26 02:09:31.121738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.161 [2024-07-26 02:09:31.121854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.161 [2024-07-26 02:09:31.121880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.161 [2024-07-26 02:09:31.121896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.161 [2024-07-26 02:09:31.121911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd158000b90
00:33:49.161 [2024-07-26 02:09:31.121943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:49.161 qpair failed and we were unable to recover it.
00:33:49.161 [2024-07-26 02:09:31.131775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.161 [2024-07-26 02:09:31.131896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.161 [2024-07-26 02:09:31.131931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.161 [2024-07-26 02:09:31.131950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.161 [2024-07-26 02:09:31.131965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd150000b90
00:33:49.161 [2024-07-26 02:09:31.131999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:49.161 qpair failed and we were unable to recover it.
00:33:49.161 [2024-07-26 02:09:31.141826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.161 [2024-07-26 02:09:31.141990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.161 [2024-07-26 02:09:31.142018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.161 [2024-07-26 02:09:31.142034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.161 [2024-07-26 02:09:31.142049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd150000b90
00:33:49.161 [2024-07-26 02:09:31.142089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:49.161 qpair failed and we were unable to recover it.
00:33:49.161 [2024-07-26 02:09:31.151823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:49.161 [2024-07-26 02:09:31.151945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:49.161 [2024-07-26 02:09:31.151978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:49.161 [2024-07-26 02:09:31.151995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:49.161 [2024-07-26 02:09:31.152011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90
00:33:49.161 [2024-07-26 02:09:31.152044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:49.161 qpair failed and we were unable to recover it.
00:33:49.161 [2024-07-26 02:09:31.161839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.161 [2024-07-26 02:09:31.161965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.161 [2024-07-26 02:09:31.161994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.161 [2024-07-26 02:09:31.162010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.161 [2024-07-26 02:09:31.162026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd148000b90 00:33:49.161 [2024-07-26 02:09:31.162067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.161 qpair failed and we were unable to recover it. 00:33:49.161 [2024-07-26 02:09:31.162163] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:33:49.161 A controller has encountered a failure and is being reset. 00:33:49.420 Controller properly reset. 00:33:49.420 Initializing NVMe Controllers 00:33:49.420 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:49.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:49.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:49.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:49.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:49.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:49.420 Initialization complete. Launching workers. 
00:33:49.420 Starting thread on core 1 00:33:49.420 Starting thread on core 2 00:33:49.420 Starting thread on core 3 00:33:49.420 Starting thread on core 0 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:49.420 00:33:49.420 real 0m10.871s 00:33:49.420 user 0m18.180s 00:33:49.420 sys 0m5.567s 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.420 ************************************ 00:33:49.420 END TEST nvmf_target_disconnect_tc2 00:33:49.420 ************************************ 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:49.420 rmmod nvme_tcp 00:33:49.420 rmmod nvme_fabrics 00:33:49.420 rmmod nvme_keyring 00:33:49.420 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2422122 ']' 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2422122 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2422122 ']' 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2422122 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2422122 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2422122' 00:33:49.680 killing process with pid 2422122 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2422122 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2422122 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.680 02:09:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.215 02:09:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:52.215 00:33:52.215 real 0m15.532s 00:33:52.215 user 0m44.647s 00:33:52.215 sys 0m7.500s 00:33:52.215 02:09:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:52.215 02:09:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:52.215 ************************************ 00:33:52.215 END TEST nvmf_target_disconnect 00:33:52.215 ************************************ 00:33:52.215 02:09:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:52.215 00:33:52.215 real 6m31.722s 00:33:52.215 user 16m40.275s 00:33:52.215 sys 1m27.599s 00:33:52.215 02:09:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:52.215 02:09:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.215 ************************************ 00:33:52.215 END TEST nvmf_host 00:33:52.215 ************************************ 00:33:52.215 00:33:52.215 real 27m5.871s 00:33:52.215 user 73m44.788s 00:33:52.215 sys 6m24.631s 00:33:52.215 02:09:33 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:52.215 02:09:33 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:33:52.215 ************************************ 00:33:52.215 END TEST nvmf_tcp 00:33:52.215 ************************************ 00:33:52.215 02:09:33 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:33:52.215 02:09:33 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:52.215 02:09:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:52.215 02:09:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:52.215 02:09:33 -- common/autotest_common.sh@10 -- # set +x 00:33:52.215 ************************************ 00:33:52.215 START TEST spdkcli_nvmf_tcp 00:33:52.215 ************************************ 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:52.215 * Looking for test storage... 00:33:52.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.215 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2423322 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2423322 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2423322 ']' 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:52.216 02:09:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.216 [2024-07-26 02:09:33.935378] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:33:52.216 [2024-07-26 02:09:33.935465] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2423322 ] 00:33:52.216 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.216 [2024-07-26 02:09:33.992837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:52.216 [2024-07-26 02:09:34.079226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.216 [2024-07-26 02:09:34.079229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.216 02:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:52.216 02:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:33:52.216 02:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:52.216 02:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:52.216 02:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.216 02:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:52.216 02:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:52.216 02:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:52.216 02:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:52.216 02:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:33:52.216 02:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:52.216 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:52.216 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:52.216 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:52.216 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:52.216 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:52.216 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:52.216 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:52.216 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:52.216 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:52.216 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:52.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:52.216 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:52.216 ' 00:33:55.502 [2024-07-26 02:09:36.763770] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.068 [2024-07-26 02:09:37.984041] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:58.604 [2024-07-26 02:09:40.251308] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:00.506 [2024-07-26 02:09:42.221386] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:34:01.892 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:01.892 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:01.892 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:01.892 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:01.892 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:01.892 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:01.892 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:01.892 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:01.892 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:01.892 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:01.892 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:01.892 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:01.892 02:09:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:01.892 02:09:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:01.892 02:09:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.892 02:09:43 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:01.892 02:09:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:01.892 02:09:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.892 02:09:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:01.892 02:09:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:02.460 02:09:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:02.460 02:09:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:02.460 02:09:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:02.460 02:09:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:02.460 02:09:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.460 02:09:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:02.460 02:09:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:02.460 02:09:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:02.460 02:09:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:02.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:02.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:02.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:02.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:02.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:02.460 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:02.460 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:02.460 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:02.460 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:02.460 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:02.460 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:02.460 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:02.460 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:02.460 ' 00:34:07.729 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:07.729 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:07.729 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:07.729 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:07.729 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:07.729 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:07.729 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:07.729 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:07.729 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:34:07.729 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:07.729 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:07.729 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:07.729 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:07.729 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2423322 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2423322 ']' 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2423322 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2423322 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:07.729 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2423322' 00:34:07.729 killing process with pid 2423322 00:34:07.730 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2423322 00:34:07.730 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2423322 00:34:07.988 02:09:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:07.988 02:09:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:07.988 
02:09:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2423322 ']' 00:34:07.988 02:09:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2423322 00:34:07.988 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2423322 ']' 00:34:07.988 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2423322 00:34:07.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2423322) - No such process 00:34:07.988 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2423322 is not found' 00:34:07.988 Process with pid 2423322 is not found 00:34:07.988 02:09:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:07.988 02:09:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:07.988 02:09:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:07.988 00:34:07.988 real 0m16.003s 00:34:07.988 user 0m33.823s 00:34:07.988 sys 0m0.810s 00:34:07.988 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:07.988 02:09:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.988 ************************************ 00:34:07.988 END TEST spdkcli_nvmf_tcp 00:34:07.988 ************************************ 00:34:07.988 02:09:49 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:07.988 02:09:49 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:07.988 02:09:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:07.988 02:09:49 -- common/autotest_common.sh@10 -- # set +x 00:34:07.989 ************************************ 00:34:07.989 START TEST 
nvmf_identify_passthru 00:34:07.989 ************************************ 00:34:07.989 02:09:49 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:07.989 * Looking for test storage... 00:34:07.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.989 02:09:49 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.989 02:09:49 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.989 02:09:49 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.989 02:09:49 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.989 02:09:49 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.989 02:09:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.989 02:09:49 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.989 02:09:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:07.989 02:09:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:07.989 02:09:49 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.989 02:09:49 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.989 02:09:49 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.989 02:09:49 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.989 02:09:49 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.989 02:09:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.989 02:09:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.989 02:09:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:34:07.989 02:09:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.989 02:09:49 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.989 02:09:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:07.989 02:09:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:07.989 02:09:49 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:07.989 02:09:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.891 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.891 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:34:09.891 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:09.891 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:09.891 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:09.891 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:09.891 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:09.891 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:09.891 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:09.891 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.892 02:09:51 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:09.892 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:09.892 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:09.892 02:09:51 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:09.892 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.892 02:09:51 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:09.892 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.892 02:09:51 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.892 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:10.152 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:10.152 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.152 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:10.152 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.152 02:09:51 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:10.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:10.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:34:10.152 00:34:10.152 --- 10.0.0.2 ping statistics --- 00:34:10.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.152 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:10.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:10.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:34:10.152 00:34:10.152 --- 10.0.0.1 ping statistics --- 00:34:10.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.152 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:10.152 02:09:52 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:10.152 02:09:52 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.152 02:09:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:34:10.152 02:09:52 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:34:10.152 02:09:52 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:34:10.152 02:09:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:10.152 02:09:52 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:10.152 02:09:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:10.152 02:09:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:10.152 02:09:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:10.152 EAL: No free 2048 kB hugepages reported on node 1 00:34:14.353 02:09:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:14.353 02:09:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:14.353 02:09:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:34:14.353 02:09:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:14.353 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.626 02:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:18.626 02:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:18.626 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:18.626 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.626 02:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:18.626 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:18.626 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.626 02:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2427824 00:34:18.626 02:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:18.626 02:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:18.626 02:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2427824 00:34:18.626 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2427824 ']' 00:34:18.626 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.626 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:18.626 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:18.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.626 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:18.626 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.884 [2024-07-26 02:10:00.650574] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:34:18.884 [2024-07-26 02:10:00.650655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.884 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.884 [2024-07-26 02:10:00.720682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:18.884 [2024-07-26 02:10:00.810006] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:18.884 [2024-07-26 02:10:00.810093] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:18.884 [2024-07-26 02:10:00.810109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:18.884 [2024-07-26 02:10:00.810120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:18.884 [2024-07-26 02:10:00.810144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:18.884 [2024-07-26 02:10:00.810193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.884 [2024-07-26 02:10:00.810261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:18.884 [2024-07-26 02:10:00.810318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:18.884 [2024-07-26 02:10:00.810321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.884 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:18.884 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:34:18.884 02:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:18.884 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.884 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.884 INFO: Log level set to 20 00:34:18.884 INFO: Requests: 00:34:18.884 { 00:34:18.884 "jsonrpc": "2.0", 00:34:18.884 "method": "nvmf_set_config", 00:34:18.884 "id": 1, 00:34:18.884 "params": { 00:34:18.884 "admin_cmd_passthru": { 00:34:18.884 "identify_ctrlr": true 00:34:18.884 } 00:34:18.884 } 00:34:18.884 } 00:34:18.884 00:34:18.884 INFO: response: 00:34:18.884 { 00:34:18.884 "jsonrpc": "2.0", 00:34:18.884 "id": 1, 00:34:18.884 "result": true 00:34:18.884 } 00:34:18.884 00:34:18.885 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.885 02:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:18.885 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.885 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.885 INFO: Setting log level to 20 00:34:18.885 INFO: Setting log level to 20 00:34:18.885 INFO: Log level set to 20 00:34:18.885 INFO: Log level set to 20 00:34:18.885 
INFO: Requests: 00:34:18.885 { 00:34:18.885 "jsonrpc": "2.0", 00:34:18.885 "method": "framework_start_init", 00:34:18.885 "id": 1 00:34:18.885 } 00:34:18.885 00:34:18.885 INFO: Requests: 00:34:18.885 { 00:34:18.885 "jsonrpc": "2.0", 00:34:18.885 "method": "framework_start_init", 00:34:18.885 "id": 1 00:34:18.885 } 00:34:18.885 00:34:19.143 [2024-07-26 02:10:00.972329] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:19.143 INFO: response: 00:34:19.143 { 00:34:19.143 "jsonrpc": "2.0", 00:34:19.143 "id": 1, 00:34:19.143 "result": true 00:34:19.143 } 00:34:19.143 00:34:19.143 INFO: response: 00:34:19.143 { 00:34:19.143 "jsonrpc": "2.0", 00:34:19.143 "id": 1, 00:34:19.143 "result": true 00:34:19.143 } 00:34:19.143 00:34:19.143 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.143 02:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:19.143 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.143 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.143 INFO: Setting log level to 40 00:34:19.143 INFO: Setting log level to 40 00:34:19.143 INFO: Setting log level to 40 00:34:19.143 [2024-07-26 02:10:00.982404] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.143 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.143 02:10:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:19.143 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:19.143 02:10:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.143 02:10:01 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:19.143 02:10:01 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.143 02:10:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.431 Nvme0n1 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.431 02:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.431 02:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.431 02:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.431 [2024-07-26 02:10:03.872134] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.431 02:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.431 02:10:03 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.431 [ 00:34:22.431 { 00:34:22.431 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:22.431 "subtype": "Discovery", 00:34:22.431 "listen_addresses": [], 00:34:22.431 "allow_any_host": true, 00:34:22.431 "hosts": [] 00:34:22.431 }, 00:34:22.431 { 00:34:22.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:22.431 "subtype": "NVMe", 00:34:22.431 "listen_addresses": [ 00:34:22.431 { 00:34:22.431 "trtype": "TCP", 00:34:22.431 "adrfam": "IPv4", 00:34:22.431 "traddr": "10.0.0.2", 00:34:22.431 "trsvcid": "4420" 00:34:22.431 } 00:34:22.431 ], 00:34:22.431 "allow_any_host": true, 00:34:22.431 "hosts": [], 00:34:22.431 "serial_number": "SPDK00000000000001", 00:34:22.431 "model_number": "SPDK bdev Controller", 00:34:22.431 "max_namespaces": 1, 00:34:22.431 "min_cntlid": 1, 00:34:22.431 "max_cntlid": 65519, 00:34:22.431 "namespaces": [ 00:34:22.431 { 00:34:22.431 "nsid": 1, 00:34:22.431 "bdev_name": "Nvme0n1", 00:34:22.431 "name": "Nvme0n1", 00:34:22.431 "nguid": "EE240AB739F74F26B50E3F6962A46980", 00:34:22.431 "uuid": "ee240ab7-39f7-4f26-b50e-3f6962a46980" 00:34:22.431 } 00:34:22.431 ] 00:34:22.431 } 00:34:22.431 ] 00:34:22.431 02:10:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.431 02:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:22.431 02:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:22.431 02:10:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:22.431 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.431 02:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:22.431 02:10:04 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:22.431 02:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:22.431 02:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:22.431 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.431 02:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:22.431 02:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:22.431 02:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:22.431 02:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.431 02:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:22.431 02:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:22.431 02:10:04 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:22.431 02:10:04 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:22.431 02:10:04 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:22.431 02:10:04 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:22.431 02:10:04 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:22.431 02:10:04 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:22.431 rmmod 
nvme_tcp 00:34:22.431 rmmod nvme_fabrics 00:34:22.431 rmmod nvme_keyring 00:34:22.431 02:10:04 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:22.431 02:10:04 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:22.431 02:10:04 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:22.431 02:10:04 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2427824 ']' 00:34:22.431 02:10:04 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2427824 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2427824 ']' 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2427824 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2427824 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2427824' 00:34:22.431 killing process with pid 2427824 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2427824 00:34:22.431 02:10:04 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2427824 00:34:24.331 02:10:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:24.331 02:10:05 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:24.331 02:10:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:24.331 02:10:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
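The serial- and model-number checks earlier in this test pipe `spdk_nvme_identify` output through `grep 'Serial Number:' | awk '{print $3}'` and compare the result against the expected values (`PHLJ916004901P0FGN`, `INTEL`). The same extraction in Python — the sample model string is illustrative, only the serial appears verbatim in this run:

```python
def extract_field(identify_output, label):
    # Equivalent of `grep "<label>" | awk '{print $3}'` as used above:
    # find the labelled line and return its third whitespace-separated
    # token (awk fields are 1-indexed, so $3 is Python index 2).
    for line in identify_output.splitlines():
        if line.strip().startswith(label):
            return line.split()[2]
    return None

sample = "Serial Number: PHLJ916004901P0FGN\nModel Number: INTEL SSDPE2KX010T8"
print(extract_field(sample, "Serial Number:"))  # PHLJ916004901P0FGN
print(extract_field(sample, "Model Number:"))   # INTEL
```

Note that `awk '{print $3}'` keeps only the first word of a multi-word model string, which is why the test compares `INTEL` rather than a full model number.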
00:34:24.332 02:10:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:24.332 02:10:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.332 02:10:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:24.332 02:10:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.231 02:10:07 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:26.231 00:34:26.231 real 0m18.045s 00:34:26.231 user 0m26.623s 00:34:26.231 sys 0m2.347s 00:34:26.231 02:10:07 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:26.231 02:10:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:26.231 ************************************ 00:34:26.231 END TEST nvmf_identify_passthru 00:34:26.231 ************************************ 00:34:26.231 02:10:07 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:26.231 02:10:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:26.231 02:10:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:26.231 02:10:07 -- common/autotest_common.sh@10 -- # set +x 00:34:26.231 ************************************ 00:34:26.231 START TEST nvmf_dif 00:34:26.231 ************************************ 00:34:26.231 02:10:07 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:26.231 * Looking for test storage... 
00:34:26.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:26.231 02:10:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:26.231 02:10:08 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:26.231 02:10:08 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:26.231 02:10:08 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:26.231 02:10:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.231 02:10:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.231 02:10:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.231 02:10:08 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:26.231 02:10:08 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:26.231 02:10:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:26.231 02:10:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:26.231 02:10:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:26.231 02:10:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:26.231 02:10:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.231 02:10:08 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:26.231 02:10:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:26.231 02:10:08 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:26.231 02:10:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
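The `gather_supported_nvmf_pci_devs` phase below matches PCI (vendor, device) ID pairs — Intel E810 (`0x8086`/`0x159b`, the `ice` devices found in this run), X722 (`0x37d2`), and several Mellanox ConnectX IDs — against the bus before picking test NICs. A rough Python equivalent of that sysfs scan; the function name and parameterized base path are made up so the sketch can run against a fake tree:

```python
import os

def find_nics(base="/sys/bus/pci/devices", wanted=(("0x8086", "0x159b"),)):
    # Illustrative sketch of what gather_supported_nvmf_pci_devs does:
    # walk PCI devices in sysfs and keep those whose (vendor, device)
    # pair matches a supported NIC, e.g. Intel E810 (0x8086, 0x159b).
    if not os.path.isdir(base):
        return []
    matches = []
    for addr in sorted(os.listdir(base)):
        dev_dir = os.path.join(base, addr)
        try:
            with open(os.path.join(dev_dir, "vendor")) as f:
                vendor = f.read().strip()
            with open(os.path.join(dev_dir, "device")) as f:
                device = f.read().strip()
        except OSError:
            continue  # not a readable PCI device entry
        if (vendor, device) in wanted:
            matches.append(addr)
    return matches
```

On the machine in this log, such a scan would report `0000:0a:00.0` and `0000:0a:00.1`, whose net devices (`cvl_0_0`, `cvl_0_1`) the script then wires up.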
00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:28.140 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:34:28.140 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:28.140 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:28.140 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.140 02:10:09 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.140 02:10:09 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.141 02:10:09 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:28.141 02:10:09 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.141 02:10:09 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.141 02:10:09 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.141 02:10:09 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:28.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:34:28.141 00:34:28.141 --- 10.0.0.2 ping statistics --- 00:34:28.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.141 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:34:28.141 02:10:09 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:28.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:34:28.141 00:34:28.141 --- 10.0.0.1 ping statistics --- 00:34:28.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.141 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:34:28.141 02:10:09 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.141 02:10:09 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:28.141 02:10:09 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:28.141 02:10:09 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:29.082 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:29.082 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:29.082 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:29.082 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:29.082 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:29.082 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:29.082 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:29.082 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:29.082 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:29.082 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:29.082 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:29.082 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:29.082 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:29.082 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:29.082 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:29.082 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:29.082 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:29.349 02:10:11 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:29.349 02:10:11 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:29.349 02:10:11 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:29.349 02:10:11 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:29.349 02:10:11 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:29.349 02:10:11 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:29.349 02:10:11 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:29.349 02:10:11 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:29.349 02:10:11 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:29.349 02:10:11 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:29.349 02:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.349 02:10:11 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2431078 00:34:29.349 02:10:11 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:29.349 02:10:11 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2431078 00:34:29.349 02:10:11 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2431078 ']' 00:34:29.349 02:10:11 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.349 02:10:11 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:29.349 02:10:11 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.349 02:10:11 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:29.349 02:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.349 [2024-07-26 02:10:11.261870] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:34:29.349 [2024-07-26 02:10:11.261942] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.349 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.349 [2024-07-26 02:10:11.325462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.608 [2024-07-26 02:10:11.413220] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:29.608 [2024-07-26 02:10:11.413287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:29.608 [2024-07-26 02:10:11.413314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:29.608 [2024-07-26 02:10:11.413328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:29.608 [2024-07-26 02:10:11.413339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:29.608 [2024-07-26 02:10:11.413370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.608 02:10:11 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:29.608 02:10:11 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:34:29.608 02:10:11 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:29.608 02:10:11 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:29.608 02:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.608 02:10:11 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:29.608 02:10:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:29.608 02:10:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:29.608 02:10:11 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.608 02:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.608 [2024-07-26 02:10:11.548159] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:29.608 02:10:11 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.608 02:10:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:29.608 02:10:11 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:29.608 02:10:11 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:29.608 02:10:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.608 ************************************ 00:34:29.608 START TEST fio_dif_1_default 00:34:29.608 ************************************ 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.608 bdev_null0 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.608 [2024-07-26 02:10:11.604390] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:29.608 { 00:34:29.608 "params": { 00:34:29.608 "name": "Nvme$subsystem", 00:34:29.608 "trtype": "$TEST_TRANSPORT", 00:34:29.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.608 "adrfam": "ipv4", 00:34:29.608 "trsvcid": "$NVMF_PORT", 00:34:29.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.608 "hdgst": ${hdgst:-false}, 00:34:29.608 "ddgst": ${ddgst:-false} 00:34:29.608 }, 00:34:29.608 "method": "bdev_nvme_attach_controller" 00:34:29.608 } 00:34:29.608 EOF 00:34:29.608 )") 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:29.608 02:10:11 
nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:29.608 02:10:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:29.608 "params": { 00:34:29.608 "name": "Nvme0", 00:34:29.608 "trtype": "tcp", 00:34:29.609 "traddr": "10.0.0.2", 00:34:29.609 "adrfam": "ipv4", 00:34:29.609 "trsvcid": "4420", 00:34:29.609 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.609 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.609 "hdgst": false, 00:34:29.609 "ddgst": false 00:34:29.609 }, 00:34:29.609 "method": "bdev_nvme_attach_controller" 00:34:29.609 }' 00:34:29.867 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:29.867 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:29.867 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.867 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.867 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:29.867 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:29.867 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:29.867 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:29.867 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:29.867 02:10:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.867 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:29.867 fio-3.35 
00:34:29.867 Starting 1 thread 00:34:30.125 EAL: No free 2048 kB hugepages reported on node 1 00:34:42.338 00:34:42.338 filename0: (groupid=0, jobs=1): err= 0: pid=2431301: Fri Jul 26 02:10:22 2024 00:34:42.338 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:34:42.338 slat (nsec): min=4487, max=57523, avg=9424.17, stdev=3283.80 00:34:42.338 clat (usec): min=40787, max=48180, avg=41008.32, stdev=466.51 00:34:42.338 lat (usec): min=40795, max=48194, avg=41017.75, stdev=466.59 00:34:42.338 clat percentiles (usec): 00:34:42.338 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:42.338 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:42.338 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:42.338 | 99.00th=[41681], 99.50th=[41681], 99.90th=[47973], 99.95th=[47973], 00:34:42.338 | 99.99th=[47973] 00:34:42.338 bw ( KiB/s): min= 384, max= 416, per=99.52%, avg=388.80, stdev=11.72, samples=20 00:34:42.338 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:42.338 lat (msec) : 50=100.00% 00:34:42.338 cpu : usr=90.20%, sys=9.52%, ctx=14, majf=0, minf=242 00:34:42.338 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.338 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.338 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:42.338 00:34:42.338 Run status group 0 (all jobs): 00:34:42.338 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10014-10014msec 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.338 00:34:42.338 real 0m11.064s 00:34:42.338 user 0m9.989s 00:34:42.338 sys 0m1.237s 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:42.338 ************************************ 00:34:42.338 END TEST fio_dif_1_default 00:34:42.338 ************************************ 00:34:42.338 02:10:22 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:42.338 02:10:22 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:42.338 02:10:22 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:42.338 02:10:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:42.338 ************************************ 00:34:42.338 START TEST fio_dif_1_multi_subsystems 00:34:42.338 ************************************ 00:34:42.338 02:10:22 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:42.338 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.339 bdev_null0 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.339 02:10:22 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.339 [2024-07-26 02:10:22.712872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.339 bdev_null1 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:34:42.339 { 00:34:42.339 "params": { 00:34:42.339 "name": "Nvme$subsystem", 00:34:42.339 "trtype": "$TEST_TRANSPORT", 00:34:42.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.339 "adrfam": "ipv4", 00:34:42.339 "trsvcid": "$NVMF_PORT", 00:34:42.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.339 "hdgst": ${hdgst:-false}, 00:34:42.339 "ddgst": ${ddgst:-false} 00:34:42.339 }, 00:34:42.339 "method": "bdev_nvme_attach_controller" 00:34:42.339 } 00:34:42.339 EOF 00:34:42.339 )") 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:42.339 { 00:34:42.339 "params": { 00:34:42.339 "name": "Nvme$subsystem", 00:34:42.339 "trtype": "$TEST_TRANSPORT", 00:34:42.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.339 "adrfam": "ipv4", 00:34:42.339 "trsvcid": "$NVMF_PORT", 00:34:42.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.339 "hdgst": ${hdgst:-false}, 00:34:42.339 "ddgst": ${ddgst:-false} 00:34:42.339 }, 00:34:42.339 "method": "bdev_nvme_attach_controller" 00:34:42.339 } 00:34:42.339 EOF 00:34:42.339 )") 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:42.339 "params": { 00:34:42.339 "name": "Nvme0", 00:34:42.339 "trtype": "tcp", 00:34:42.339 "traddr": "10.0.0.2", 00:34:42.339 "adrfam": "ipv4", 00:34:42.339 "trsvcid": "4420", 00:34:42.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:42.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:42.339 "hdgst": false, 00:34:42.339 "ddgst": false 00:34:42.339 }, 00:34:42.339 "method": "bdev_nvme_attach_controller" 00:34:42.339 },{ 00:34:42.339 "params": { 00:34:42.339 "name": "Nvme1", 00:34:42.339 "trtype": "tcp", 00:34:42.339 "traddr": "10.0.0.2", 00:34:42.339 "adrfam": "ipv4", 00:34:42.339 "trsvcid": "4420", 00:34:42.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:42.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:42.339 "hdgst": false, 00:34:42.339 "ddgst": false 00:34:42.339 }, 00:34:42.339 "method": "bdev_nvme_attach_controller" 00:34:42.339 }' 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:42.339 02:10:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.339 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:42.339 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:42.339 fio-3.35 00:34:42.339 Starting 2 threads 00:34:42.339 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.341 00:34:52.341 filename0: (groupid=0, jobs=1): err= 0: pid=2432696: Fri Jul 26 02:10:33 2024 00:34:52.341 read: IOPS=96, BW=388KiB/s (397kB/s)(3888KiB/10033msec) 00:34:52.341 slat (nsec): min=7158, max=31141, avg=9934.81, stdev=2883.39 00:34:52.341 clat (usec): min=40846, max=44523, avg=41255.48, stdev=493.32 00:34:52.341 lat (usec): min=40854, max=44541, avg=41265.42, stdev=493.70 00:34:52.341 clat percentiles (usec): 00:34:52.341 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:52.341 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:52.341 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:52.341 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:34:52.341 | 99.99th=[44303] 00:34:52.341 bw ( KiB/s): min= 352, max= 416, per=33.99%, avg=387.20, stdev=14.31, samples=20 00:34:52.341 iops : min= 88, max= 104, avg=96.80, stdev= 3.58, samples=20 00:34:52.341 lat (msec) : 50=100.00% 00:34:52.341 cpu : usr=94.48%, sys=5.21%, ctx=12, majf=0, minf=153 00:34:52.341 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:52.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.341 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.341 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:52.341 filename1: (groupid=0, jobs=1): err= 0: pid=2432697: Fri Jul 26 02:10:33 2024 00:34:52.341 read: IOPS=187, BW=751KiB/s (769kB/s)(7536KiB/10033msec) 00:34:52.341 slat (usec): min=7, max=101, avg=10.30, stdev= 4.05 00:34:52.341 clat (usec): min=685, max=44532, avg=21268.87, stdev=20282.20 00:34:52.341 lat (usec): min=693, max=44567, avg=21279.17, stdev=20282.54 00:34:52.341 clat percentiles (usec): 00:34:52.341 | 1.00th=[ 701], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 742], 00:34:52.341 | 30.00th=[ 758], 40.00th=[ 881], 50.00th=[41157], 60.00th=[41157], 00:34:52.341 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:34:52.341 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:34:52.341 | 99.99th=[44303] 00:34:52.342 bw ( KiB/s): min= 640, max= 768, per=66.04%, avg=752.00, stdev=35.21, samples=20 00:34:52.342 iops : min= 160, max= 192, avg=188.00, stdev= 8.80, samples=20 00:34:52.342 lat (usec) : 750=25.42%, 1000=23.41% 00:34:52.342 lat (msec) : 2=0.64%, 50=50.53% 00:34:52.342 cpu : usr=94.11%, sys=5.60%, ctx=13, majf=0, minf=160 00:34:52.342 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.342 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.342 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:52.342 00:34:52.342 Run status group 0 (all jobs): 00:34:52.342 READ: bw=1139KiB/s (1166kB/s), 388KiB/s-751KiB/s (397kB/s-769kB/s), io=11.2MiB (11.7MB), run=10033-10033msec 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.342 02:10:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:52.342 02:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.342 00:34:52.342 real 0m11.319s 00:34:52.342 user 0m20.234s 00:34:52.342 sys 0m1.406s 00:34:52.342 02:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:52.342 02:10:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:52.342 ************************************ 00:34:52.342 END TEST fio_dif_1_multi_subsystems 00:34:52.342 ************************************ 00:34:52.342 02:10:34 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:52.342 02:10:34 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:52.342 02:10:34 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:52.342 02:10:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:52.342 ************************************ 00:34:52.342 START TEST fio_dif_rand_params 00:34:52.342 ************************************ 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.342 bdev_null0 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.342 [2024-07-26 02:10:34.074446] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:52.342 { 00:34:52.342 "params": { 00:34:52.342 "name": "Nvme$subsystem", 00:34:52.342 "trtype": "$TEST_TRANSPORT", 00:34:52.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:52.342 "adrfam": "ipv4", 00:34:52.342 "trsvcid": "$NVMF_PORT", 00:34:52.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:52.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:52.342 "hdgst": ${hdgst:-false}, 00:34:52.342 "ddgst": 
${ddgst:-false} 00:34:52.342 }, 00:34:52.342 "method": "bdev_nvme_attach_controller" 00:34:52.342 } 00:34:52.342 EOF 00:34:52.342 )") 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- 
# grep libasan 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:52.342 "params": { 00:34:52.342 "name": "Nvme0", 00:34:52.342 "trtype": "tcp", 00:34:52.342 "traddr": "10.0.0.2", 00:34:52.342 "adrfam": "ipv4", 00:34:52.342 "trsvcid": "4420", 00:34:52.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:52.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:52.342 "hdgst": false, 00:34:52.342 "ddgst": false 00:34:52.342 }, 00:34:52.342 "method": "bdev_nvme_attach_controller" 00:34:52.342 }' 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:52.342 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:52.343 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.343 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:52.343 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:52.343 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:52.343 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:52.343 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:52.343 02:10:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.343 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:52.343 ... 00:34:52.343 fio-3.35 00:34:52.343 Starting 3 threads 00:34:52.600 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.862 00:34:57.862 filename0: (groupid=0, jobs=1): err= 0: pid=2433984: Fri Jul 26 02:10:39 2024 00:34:57.862 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(129MiB/5003msec) 00:34:57.862 slat (nsec): min=4475, max=38169, avg=14389.21, stdev=2614.19 00:34:57.862 clat (usec): min=4733, max=56404, avg=14536.98, stdev=12167.27 00:34:57.862 lat (usec): min=4747, max=56418, avg=14551.37, stdev=12167.30 00:34:57.862 clat percentiles (usec): 00:34:57.862 | 1.00th=[ 5276], 5.00th=[ 5538], 10.00th=[ 7177], 20.00th=[ 8356], 00:34:57.862 | 30.00th=[ 9110], 40.00th=[10290], 50.00th=[11469], 60.00th=[12125], 00:34:57.862 | 70.00th=[12911], 80.00th=[14091], 90.00th=[16909], 95.00th=[51119], 00:34:57.862 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[56361], 00:34:57.862 | 99.99th=[56361] 00:34:57.862 bw ( KiB/s): min=17664, max=35840, per=30.95%, avg=26348.10, stdev=4920.56, samples=10 00:34:57.862 iops : min= 138, max= 280, avg=205.80, stdev=38.42, samples=10 00:34:57.862 lat (msec) : 10=37.83%, 20=52.57%, 50=3.59%, 100=6.01% 00:34:57.862 cpu : usr=92.38%, sys=7.10%, ctx=12, majf=0, minf=104 00:34:57.862 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.862 issued rwts: total=1031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.862 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:57.862 filename0: (groupid=0, jobs=1): err= 0: pid=2433985: Fri Jul 26 02:10:39 2024 00:34:57.862 read: IOPS=225, BW=28.2MiB/s 
(29.6MB/s)(141MiB/5004msec) 00:34:57.862 slat (nsec): min=4501, max=45551, avg=14052.05, stdev=3435.84 00:34:57.862 clat (usec): min=3958, max=90809, avg=13263.13, stdev=10936.79 00:34:57.862 lat (usec): min=3971, max=90823, avg=13277.18, stdev=10936.68 00:34:57.862 clat percentiles (usec): 00:34:57.862 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 8094], 00:34:57.862 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10945], 60.00th=[11731], 00:34:57.862 | 70.00th=[12649], 80.00th=[14091], 90.00th=[16188], 95.00th=[48497], 00:34:57.862 | 99.00th=[52167], 99.50th=[54264], 99.90th=[87557], 99.95th=[90702], 00:34:57.862 | 99.99th=[90702] 00:34:57.862 bw ( KiB/s): min=18432, max=36864, per=33.92%, avg=28876.80, stdev=6351.23, samples=10 00:34:57.862 iops : min= 144, max= 288, avg=225.60, stdev=49.62, samples=10 00:34:57.862 lat (msec) : 4=0.09%, 10=43.10%, 20=49.65%, 50=4.42%, 100=2.74% 00:34:57.862 cpu : usr=92.80%, sys=6.76%, ctx=7, majf=0, minf=38 00:34:57.862 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.862 issued rwts: total=1130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.862 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:57.862 filename0: (groupid=0, jobs=1): err= 0: pid=2433986: Fri Jul 26 02:10:39 2024 00:34:57.862 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(146MiB/5002msec) 00:34:57.862 slat (nsec): min=4399, max=68689, avg=13766.17, stdev=2787.42 00:34:57.862 clat (usec): min=4456, max=88888, avg=12839.82, stdev=10519.27 00:34:57.862 lat (usec): min=4469, max=88901, avg=12853.59, stdev=10519.17 00:34:57.862 clat percentiles (usec): 00:34:57.862 | 1.00th=[ 5080], 5.00th=[ 5669], 10.00th=[ 6915], 20.00th=[ 8225], 00:34:57.862 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10683], 60.00th=[11338], 00:34:57.862 | 70.00th=[11863], 
80.00th=[12518], 90.00th=[13960], 95.00th=[48497], 00:34:57.862 | 99.00th=[53216], 99.50th=[54789], 99.90th=[54789], 99.95th=[88605], 00:34:57.862 | 99.99th=[88605] 00:34:57.862 bw ( KiB/s): min=22272, max=37632, per=35.03%, avg=29824.00, stdev=4666.49, samples=10 00:34:57.862 iops : min= 174, max= 294, avg=233.00, stdev=36.46, samples=10 00:34:57.862 lat (msec) : 10=43.10%, 20=49.87%, 50=4.03%, 100=3.00% 00:34:57.862 cpu : usr=93.82%, sys=5.72%, ctx=10, majf=0, minf=146 00:34:57.863 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.863 issued rwts: total=1167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.863 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:57.863 00:34:57.863 Run status group 0 (all jobs): 00:34:57.863 READ: bw=83.1MiB/s (87.2MB/s), 25.8MiB/s-29.2MiB/s (27.0MB/s-30.6MB/s), io=416MiB (436MB), run=5002-5004msec 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.121 02:10:40 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.121 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.122 bdev_null0 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.122 [2024-07-26 02:10:40.113055] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:58.122 bdev_null1 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.122 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.380 bdev_null2 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.380 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:58.381 02:10:40 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:58.381 { 00:34:58.381 "params": { 00:34:58.381 "name": "Nvme$subsystem", 00:34:58.381 "trtype": "$TEST_TRANSPORT", 00:34:58.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.381 "adrfam": "ipv4", 00:34:58.381 "trsvcid": "$NVMF_PORT", 00:34:58.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.381 "hdgst": ${hdgst:-false}, 00:34:58.381 "ddgst": ${ddgst:-false} 00:34:58.381 }, 00:34:58.381 "method": "bdev_nvme_attach_controller" 00:34:58.381 } 00:34:58.381 EOF 00:34:58.381 )") 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:58.381 { 00:34:58.381 "params": { 00:34:58.381 "name": "Nvme$subsystem", 00:34:58.381 "trtype": "$TEST_TRANSPORT", 00:34:58.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.381 "adrfam": "ipv4", 00:34:58.381 "trsvcid": "$NVMF_PORT", 00:34:58.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.381 "hdgst": ${hdgst:-false}, 00:34:58.381 "ddgst": ${ddgst:-false} 00:34:58.381 }, 00:34:58.381 "method": "bdev_nvme_attach_controller" 00:34:58.381 } 00:34:58.381 EOF 00:34:58.381 )") 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:58.381 { 00:34:58.381 "params": { 00:34:58.381 "name": "Nvme$subsystem", 00:34:58.381 "trtype": "$TEST_TRANSPORT", 00:34:58.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.381 "adrfam": "ipv4", 00:34:58.381 "trsvcid": "$NVMF_PORT", 00:34:58.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.381 "hdgst": ${hdgst:-false}, 00:34:58.381 "ddgst": ${ddgst:-false} 00:34:58.381 }, 00:34:58.381 "method": "bdev_nvme_attach_controller" 00:34:58.381 } 00:34:58.381 EOF 00:34:58.381 )") 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:58.381 "params": { 00:34:58.381 "name": "Nvme0", 00:34:58.381 "trtype": "tcp", 00:34:58.381 "traddr": "10.0.0.2", 00:34:58.381 "adrfam": "ipv4", 00:34:58.381 "trsvcid": "4420", 00:34:58.381 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.381 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.381 "hdgst": false, 00:34:58.381 "ddgst": false 00:34:58.381 }, 00:34:58.381 "method": "bdev_nvme_attach_controller" 00:34:58.381 },{ 00:34:58.381 "params": { 00:34:58.381 "name": "Nvme1", 00:34:58.381 "trtype": "tcp", 00:34:58.381 "traddr": "10.0.0.2", 00:34:58.381 "adrfam": "ipv4", 00:34:58.381 "trsvcid": "4420", 00:34:58.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.381 "hdgst": false, 00:34:58.381 "ddgst": false 00:34:58.381 }, 00:34:58.381 "method": "bdev_nvme_attach_controller" 00:34:58.381 },{ 00:34:58.381 "params": { 00:34:58.381 "name": "Nvme2", 00:34:58.381 "trtype": "tcp", 00:34:58.381 "traddr": "10.0.0.2", 00:34:58.381 "adrfam": "ipv4", 00:34:58.381 "trsvcid": "4420", 00:34:58.381 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:58.381 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:58.381 "hdgst": false, 00:34:58.381 "ddgst": false 00:34:58.381 }, 00:34:58.381 "method": "bdev_nvme_attach_controller" 00:34:58.381 }' 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.381 02:10:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:58.381 02:10:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.640 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.640 ... 00:34:58.640 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.640 ... 00:34:58.640 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.640 ... 
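[editor's note] The xtrace above shows `gen_nvmf_target_json` building one `bdev_nvme_attach_controller` JSON fragment per subsystem id and joining them with `IFS=,` before handing the result to fio as `--spdk_json_conf`. The following is a minimal standalone sketch of that pattern, reconstructed from the trace; the variable names mirror the trace, but the exact wrapper text emitted by the real `nvmf/common.sh` helper (which also pipes through `jq .`) may differ.

```shell
#!/bin/bash
# Sketch of the config-fragment pattern from the trace (not the real helper).
# Stand-in values; the CI run resolves these from its own environment.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        # One JSON object per subsystem id, templated via heredoc.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the fragments with commas, as the trace's `IFS=,` + printf does.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 0 1 2
```

Running it for ids 0 1 2 prints three comma-joined objects naming Nvme0, Nvme1, and Nvme2, matching the expanded config printed by `nvmf/common.sh@558` in the trace.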
00:34:58.640 fio-3.35 00:34:58.640 Starting 24 threads 00:34:58.640 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.838 00:35:10.838 filename0: (groupid=0, jobs=1): err= 0: pid=2434843: Fri Jul 26 02:10:51 2024 00:35:10.838 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:35:10.838 slat (usec): min=8, max=120, avg=32.75, stdev=25.91 00:35:10.838 clat (usec): min=8541, max=37633, avg=33289.46, stdev=1690.38 00:35:10.838 lat (usec): min=8552, max=37690, avg=33322.21, stdev=1689.03 00:35:10.838 clat percentiles (usec): 00:35:10.838 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:10.838 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:10.838 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.838 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35914], 99.95th=[36439], 00:35:10.838 | 99.99th=[37487] 00:35:10.838 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1906.53, stdev=40.36, samples=19 00:35:10.838 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:35:10.838 lat (msec) : 10=0.34%, 20=0.29%, 50=99.37% 00:35:10.838 cpu : usr=97.40%, sys=2.11%, ctx=35, majf=0, minf=74 00:35:10.838 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:10.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.838 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.838 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.838 filename0: (groupid=0, jobs=1): err= 0: pid=2434844: Fri Jul 26 02:10:51 2024 00:35:10.838 read: IOPS=474, BW=1896KiB/s (1942kB/s)(18.6MiB/10023msec) 00:35:10.838 slat (nsec): min=8542, max=78451, avg=34856.41, stdev=11671.68 00:35:10.838 clat (usec): min=27749, max=42777, avg=33420.48, stdev=726.09 00:35:10.838 lat (usec): min=27813, max=42807, avg=33455.33, stdev=725.27 
00:35:10.838 clat percentiles (usec): 00:35:10.838 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.838 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:10.838 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:35:10.838 | 99.00th=[34866], 99.50th=[35914], 99.90th=[42730], 99.95th=[42730], 00:35:10.838 | 99.99th=[42730] 00:35:10.838 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1894.40, stdev=52.53, samples=20 00:35:10.838 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:10.838 lat (msec) : 50=100.00% 00:35:10.838 cpu : usr=92.46%, sys=4.07%, ctx=284, majf=0, minf=60 00:35:10.838 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.838 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.838 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.838 filename0: (groupid=0, jobs=1): err= 0: pid=2434845: Fri Jul 26 02:10:51 2024 00:35:10.838 read: IOPS=471, BW=1886KiB/s (1932kB/s)(18.4MiB/10009msec) 00:35:10.838 slat (usec): min=8, max=107, avg=41.66, stdev=14.77 00:35:10.838 clat (usec): min=16025, max=85537, avg=33551.18, stdev=3181.11 00:35:10.838 lat (usec): min=16047, max=85556, avg=33592.84, stdev=3182.72 00:35:10.838 clat percentiles (usec): 00:35:10.838 | 1.00th=[28443], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.838 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:10.838 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.838 | 99.00th=[47973], 99.50th=[49021], 99.90th=[70779], 99.95th=[70779], 00:35:10.838 | 99.99th=[85459] 00:35:10.838 bw ( KiB/s): min= 1667, max= 1920, per=4.13%, avg=1879.74, stdev=82.94, samples=19 00:35:10.839 iops : min= 416, max= 480, 
avg=469.89, stdev=20.84, samples=19 00:35:10.839 lat (msec) : 20=0.74%, 50=98.92%, 100=0.34% 00:35:10.839 cpu : usr=96.35%, sys=2.21%, ctx=74, majf=0, minf=62 00:35:10.839 IO depths : 1=5.8%, 2=11.7%, 4=24.5%, 8=51.1%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:10.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.839 filename0: (groupid=0, jobs=1): err= 0: pid=2434846: Fri Jul 26 02:10:51 2024 00:35:10.839 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10003msec) 00:35:10.839 slat (nsec): min=8349, max=83163, avg=22875.56, stdev=12770.52 00:35:10.839 clat (usec): min=20956, max=46806, avg=33495.98, stdev=899.76 00:35:10.839 lat (usec): min=21009, max=46854, avg=33518.85, stdev=897.40 00:35:10.839 clat percentiles (usec): 00:35:10.839 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:10.839 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:10.839 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.839 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:35:10.839 | 99.99th=[46924] 00:35:10.839 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1899.79, stdev=47.95, samples=19 00:35:10.839 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:35:10.839 lat (msec) : 50=100.00% 00:35:10.839 cpu : usr=97.96%, sys=1.53%, ctx=103, majf=0, minf=73 00:35:10.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.839 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:35:10.839 filename0: (groupid=0, jobs=1): err= 0: pid=2434847: Fri Jul 26 02:10:51 2024 00:35:10.839 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.5MiB/10004msec) 00:35:10.839 slat (nsec): min=8035, max=76163, avg=33000.46, stdev=14092.25 00:35:10.839 clat (usec): min=17878, max=50904, avg=33423.89, stdev=1870.00 00:35:10.839 lat (usec): min=17887, max=50926, avg=33456.89, stdev=1869.91 00:35:10.839 clat percentiles (usec): 00:35:10.839 | 1.00th=[26346], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.839 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:10.839 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.839 | 99.00th=[35914], 99.50th=[47973], 99.90th=[50594], 99.95th=[51119], 00:35:10.839 | 99.99th=[51119] 00:35:10.839 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1898.11, stdev=45.63, samples=19 00:35:10.839 iops : min= 448, max= 480, avg=474.53, stdev=11.41, samples=19 00:35:10.839 lat (msec) : 20=0.34%, 50=99.49%, 100=0.17% 00:35:10.839 cpu : usr=97.65%, sys=1.87%, ctx=25, majf=0, minf=92 00:35:10.839 IO depths : 1=5.5%, 2=11.6%, 4=24.6%, 8=51.3%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:10.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 issued rwts: total=4748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.839 filename0: (groupid=0, jobs=1): err= 0: pid=2434848: Fri Jul 26 02:10:51 2024 00:35:10.839 read: IOPS=474, BW=1898KiB/s (1943kB/s)(18.6MiB/10016msec) 00:35:10.839 slat (nsec): min=8575, max=90281, avg=35756.95, stdev=12574.69 00:35:10.839 clat (usec): min=17392, max=52061, avg=33428.94, stdev=2598.87 00:35:10.839 lat (usec): min=17401, max=52094, avg=33464.70, stdev=2599.84 00:35:10.839 clat percentiles (usec): 00:35:10.839 | 1.00th=[18482], 
5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.839 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:10.839 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.839 | 99.00th=[47973], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070], 00:35:10.839 | 99.99th=[52167] 00:35:10.839 bw ( KiB/s): min= 1776, max= 1936, per=4.16%, avg=1894.40, stdev=53.04, samples=20 00:35:10.839 iops : min= 444, max= 484, avg=473.60, stdev=13.26, samples=20 00:35:10.839 lat (msec) : 20=1.18%, 50=98.74%, 100=0.08% 00:35:10.839 cpu : usr=98.11%, sys=1.47%, ctx=17, majf=0, minf=59 00:35:10.839 IO depths : 1=3.9%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:35:10.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.839 filename0: (groupid=0, jobs=1): err= 0: pid=2434849: Fri Jul 26 02:10:51 2024 00:35:10.839 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.6MiB/10013msec) 00:35:10.839 slat (nsec): min=9701, max=84421, avg=39641.26, stdev=13031.94 00:35:10.839 clat (usec): min=16015, max=56381, avg=33347.06, stdev=1722.47 00:35:10.839 lat (usec): min=16041, max=56414, avg=33386.70, stdev=1722.71 00:35:10.839 clat percentiles (usec): 00:35:10.839 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.839 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:10.839 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:35:10.839 | 99.00th=[34341], 99.50th=[35390], 99.90th=[56361], 99.95th=[56361], 00:35:10.839 | 99.99th=[56361] 00:35:10.839 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1893.05, stdev=68.52, samples=19 00:35:10.839 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 
00:35:10.839 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:35:10.839 cpu : usr=98.23%, sys=1.37%, ctx=20, majf=0, minf=41 00:35:10.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.839 filename0: (groupid=0, jobs=1): err= 0: pid=2434850: Fri Jul 26 02:10:51 2024 00:35:10.839 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10011msec) 00:35:10.839 slat (nsec): min=9926, max=81340, avg=34043.19, stdev=10572.09 00:35:10.839 clat (usec): min=13711, max=57733, avg=33388.24, stdev=1870.75 00:35:10.839 lat (usec): min=13744, max=57762, avg=33422.29, stdev=1870.24 00:35:10.839 clat percentiles (usec): 00:35:10.839 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.839 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:10.839 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:35:10.839 | 99.00th=[34866], 99.50th=[35914], 99.90th=[57410], 99.95th=[57934], 00:35:10.839 | 99.99th=[57934] 00:35:10.839 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1893.05, stdev=68.52, samples=19 00:35:10.839 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:35:10.839 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:35:10.839 cpu : usr=96.41%, sys=2.36%, ctx=147, majf=0, minf=48 00:35:10.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.839 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:35:10.839 filename1: (groupid=0, jobs=1): err= 0: pid=2434851: Fri Jul 26 02:10:51 2024 00:35:10.839 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10012msec) 00:35:10.839 slat (nsec): min=9068, max=80836, avg=32131.38, stdev=10242.77 00:35:10.839 clat (usec): min=28025, max=39364, avg=33444.15, stdev=582.89 00:35:10.839 lat (usec): min=28068, max=39398, avg=33476.29, stdev=581.40 00:35:10.839 clat percentiles (usec): 00:35:10.839 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.839 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:10.839 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.839 | 99.00th=[34866], 99.50th=[35390], 99.90th=[39060], 99.95th=[39060], 00:35:10.839 | 99.99th=[39584] 00:35:10.839 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1894.40, stdev=52.53, samples=20 00:35:10.839 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:10.839 lat (msec) : 50=100.00% 00:35:10.839 cpu : usr=98.38%, sys=1.21%, ctx=18, majf=0, minf=76 00:35:10.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.839 filename1: (groupid=0, jobs=1): err= 0: pid=2434852: Fri Jul 26 02:10:51 2024 00:35:10.839 read: IOPS=476, BW=1908KiB/s (1953kB/s)(18.6MiB/10010msec) 00:35:10.839 slat (nsec): min=8209, max=95541, avg=37873.32, stdev=19816.64 00:35:10.839 clat (usec): min=10482, max=57670, avg=33251.86, stdev=2902.05 00:35:10.839 lat (usec): min=10503, max=57703, avg=33289.73, stdev=2902.63 00:35:10.839 clat percentiles (usec): 00:35:10.839 | 1.00th=[18482], 5.00th=[32375], 10.00th=[32900], 
20.00th=[33162], 00:35:10.839 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:10.839 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.839 | 99.00th=[41681], 99.50th=[54789], 99.90th=[57410], 99.95th=[57410], 00:35:10.839 | 99.99th=[57410] 00:35:10.839 bw ( KiB/s): min= 1664, max= 2016, per=4.18%, avg=1902.32, stdev=70.33, samples=19 00:35:10.839 iops : min= 416, max= 504, avg=475.58, stdev=17.58, samples=19 00:35:10.839 lat (msec) : 20=1.26%, 50=98.16%, 100=0.59% 00:35:10.839 cpu : usr=96.43%, sys=2.16%, ctx=295, majf=0, minf=66 00:35:10.839 IO depths : 1=2.8%, 2=7.4%, 4=18.8%, 8=59.8%, 16=11.2%, 32=0.0%, >=64=0.0% 00:35:10.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.839 complete : 0=0.0%, 4=93.0%, 8=2.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 issued rwts: total=4774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.840 filename1: (groupid=0, jobs=1): err= 0: pid=2434853: Fri Jul 26 02:10:51 2024 00:35:10.840 read: IOPS=474, BW=1898KiB/s (1943kB/s)(18.6MiB/10016msec) 00:35:10.840 slat (usec): min=15, max=115, avg=39.09, stdev=16.39 00:35:10.840 clat (usec): min=21219, max=55734, avg=33323.46, stdev=890.59 00:35:10.840 lat (usec): min=21282, max=55754, avg=33362.55, stdev=891.91 00:35:10.840 clat percentiles (usec): 00:35:10.840 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:35:10.840 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:10.840 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:35:10.840 | 99.00th=[34866], 99.50th=[35914], 99.90th=[42730], 99.95th=[42730], 00:35:10.840 | 99.99th=[55837] 00:35:10.840 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1894.40, stdev=52.53, samples=20 00:35:10.840 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:10.840 lat (msec) : 50=99.96%, 100=0.04% 
00:35:10.840 cpu : usr=98.28%, sys=1.27%, ctx=19, majf=0, minf=43 00:35:10.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.840 filename1: (groupid=0, jobs=1): err= 0: pid=2434854: Fri Jul 26 02:10:51 2024 00:35:10.840 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10003msec) 00:35:10.840 slat (nsec): min=8437, max=77825, avg=28317.31, stdev=12189.66 00:35:10.840 clat (usec): min=21955, max=36235, avg=33450.08, stdev=763.35 00:35:10.840 lat (usec): min=21982, max=36257, avg=33478.40, stdev=761.09 00:35:10.840 clat percentiles (usec): 00:35:10.840 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:10.840 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:10.840 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.840 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:35:10.840 | 99.99th=[36439] 00:35:10.840 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1899.79, stdev=47.95, samples=19 00:35:10.840 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:35:10.840 lat (msec) : 50=100.00% 00:35:10.840 cpu : usr=97.95%, sys=1.57%, ctx=47, majf=0, minf=47 00:35:10.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.840 filename1: (groupid=0, jobs=1): err= 0: 
pid=2434855: Fri Jul 26 02:10:51 2024 00:35:10.840 read: IOPS=474, BW=1898KiB/s (1943kB/s)(18.6MiB/10016msec) 00:35:10.840 slat (usec): min=15, max=117, avg=38.39, stdev=16.78 00:35:10.840 clat (usec): min=27655, max=42954, avg=33324.59, stdev=749.97 00:35:10.840 lat (usec): min=27679, max=42973, avg=33362.98, stdev=751.94 00:35:10.840 clat percentiles (usec): 00:35:10.840 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:35:10.840 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:10.840 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:35:10.840 | 99.00th=[34866], 99.50th=[35914], 99.90th=[42730], 99.95th=[42730], 00:35:10.840 | 99.99th=[42730] 00:35:10.840 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1894.40, stdev=52.53, samples=20 00:35:10.840 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:10.840 lat (msec) : 50=100.00% 00:35:10.840 cpu : usr=98.34%, sys=1.21%, ctx=18, majf=0, minf=47 00:35:10.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.840 filename1: (groupid=0, jobs=1): err= 0: pid=2434856: Fri Jul 26 02:10:51 2024 00:35:10.840 read: IOPS=477, BW=1910KiB/s (1955kB/s)(18.7MiB/10021msec) 00:35:10.840 slat (usec): min=8, max=105, avg=22.56, stdev=21.22 00:35:10.840 clat (usec): min=8554, max=35775, avg=33309.75, stdev=1890.70 00:35:10.840 lat (usec): min=8564, max=35792, avg=33332.31, stdev=1889.45 00:35:10.840 clat percentiles (usec): 00:35:10.840 | 1.00th=[20055], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:10.840 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:10.840 | 
70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:35:10.840 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:35:10.840 | 99.99th=[35914] 00:35:10.840 bw ( KiB/s): min= 1792, max= 2048, per=4.19%, avg=1907.20, stdev=57.24, samples=20 00:35:10.840 iops : min= 448, max= 512, avg=476.80, stdev=14.31, samples=20 00:35:10.840 lat (msec) : 10=0.33%, 20=0.67%, 50=99.00% 00:35:10.840 cpu : usr=97.71%, sys=1.84%, ctx=38, majf=0, minf=49 00:35:10.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.840 filename1: (groupid=0, jobs=1): err= 0: pid=2434857: Fri Jul 26 02:10:51 2024 00:35:10.840 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.6MiB/10014msec) 00:35:10.840 slat (nsec): min=11589, max=78965, avg=38201.86, stdev=11657.75 00:35:10.840 clat (usec): min=15866, max=57220, avg=33364.08, stdev=1767.04 00:35:10.840 lat (usec): min=15900, max=57249, avg=33402.28, stdev=1766.89 00:35:10.840 clat percentiles (usec): 00:35:10.840 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.840 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:10.840 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:35:10.840 | 99.00th=[34341], 99.50th=[35390], 99.90th=[56886], 99.95th=[57410], 00:35:10.840 | 99.99th=[57410] 00:35:10.840 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1894.55, stdev=66.42, samples=20 00:35:10.840 iops : min= 416, max= 480, avg=473.60, stdev=16.74, samples=20 00:35:10.840 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:35:10.840 cpu : usr=96.62%, sys=2.12%, ctx=87, majf=0, minf=55 00:35:10.840 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.840 filename1: (groupid=0, jobs=1): err= 0: pid=2434858: Fri Jul 26 02:10:51 2024 00:35:10.840 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10006msec) 00:35:10.840 slat (nsec): min=7671, max=80389, avg=38928.41, stdev=12269.65 00:35:10.840 clat (usec): min=27978, max=64264, avg=33448.03, stdev=1860.43 00:35:10.840 lat (usec): min=28004, max=64284, avg=33486.96, stdev=1859.08 00:35:10.840 clat percentiles (usec): 00:35:10.840 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.840 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:10.840 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:35:10.840 | 99.00th=[34866], 99.50th=[35390], 99.90th=[64226], 99.95th=[64226], 00:35:10.840 | 99.99th=[64226] 00:35:10.840 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1893.05, stdev=68.52, samples=19 00:35:10.840 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:35:10.840 lat (msec) : 50=99.66%, 100=0.34% 00:35:10.840 cpu : usr=97.86%, sys=1.66%, ctx=23, majf=0, minf=63 00:35:10.840 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:10.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.840 filename2: (groupid=0, jobs=1): err= 0: pid=2434859: Fri Jul 26 02:10:51 2024 00:35:10.840 read: IOPS=474, BW=1898KiB/s 
(1943kB/s)(18.6MiB/10016msec) 00:35:10.840 slat (nsec): min=13357, max=95830, avg=37170.76, stdev=11651.87 00:35:10.840 clat (usec): min=27650, max=42880, avg=33390.63, stdev=741.89 00:35:10.840 lat (usec): min=27686, max=42901, avg=33427.80, stdev=741.12 00:35:10.840 clat percentiles (usec): 00:35:10.840 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.840 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:10.840 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:35:10.840 | 99.00th=[34866], 99.50th=[35914], 99.90th=[42730], 99.95th=[42730], 00:35:10.840 | 99.99th=[42730] 00:35:10.840 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1894.40, stdev=52.53, samples=20 00:35:10.840 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:10.840 lat (msec) : 50=100.00% 00:35:10.840 cpu : usr=94.68%, sys=3.32%, ctx=321, majf=0, minf=46 00:35:10.840 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.840 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.840 filename2: (groupid=0, jobs=1): err= 0: pid=2434860: Fri Jul 26 02:10:51 2024 00:35:10.840 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10019msec) 00:35:10.840 slat (usec): min=6, max=102, avg=26.01, stdev=23.90 00:35:10.840 clat (usec): min=17505, max=52973, avg=33498.13, stdev=2453.91 00:35:10.840 lat (usec): min=17517, max=52992, avg=33524.14, stdev=2453.05 00:35:10.840 clat percentiles (usec): 00:35:10.840 | 1.00th=[23725], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:10.840 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:10.841 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 
00:35:10.841 | 99.00th=[47973], 99.50th=[48497], 99.90th=[52691], 99.95th=[52691], 00:35:10.841 | 99.99th=[53216] 00:35:10.841 bw ( KiB/s): min= 1792, max= 1936, per=4.16%, avg=1892.50, stdev=52.50, samples=20 00:35:10.841 iops : min= 448, max= 484, avg=473.10, stdev=13.13, samples=20 00:35:10.841 lat (msec) : 20=0.97%, 50=98.70%, 100=0.34% 00:35:10.841 cpu : usr=97.45%, sys=1.68%, ctx=116, majf=0, minf=83 00:35:10.841 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:35:10.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.841 filename2: (groupid=0, jobs=1): err= 0: pid=2434861: Fri Jul 26 02:10:51 2024 00:35:10.841 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10005msec) 00:35:10.841 slat (nsec): min=8300, max=86486, avg=30808.29, stdev=14612.82 00:35:10.841 clat (usec): min=8505, max=35804, avg=33331.16, stdev=1696.62 00:35:10.841 lat (usec): min=8516, max=35829, avg=33361.97, stdev=1695.49 00:35:10.841 clat percentiles (usec): 00:35:10.841 | 1.00th=[32113], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.841 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:10.841 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.841 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:35:10.841 | 99.99th=[35914] 00:35:10.841 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1906.53, stdev=40.36, samples=19 00:35:10.841 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:35:10.841 lat (msec) : 10=0.34%, 20=0.29%, 50=99.37% 00:35:10.841 cpu : usr=92.52%, sys=4.08%, ctx=508, majf=0, minf=63 00:35:10.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.841 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.841 filename2: (groupid=0, jobs=1): err= 0: pid=2434862: Fri Jul 26 02:10:51 2024 00:35:10.841 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.6MiB/10015msec) 00:35:10.841 slat (nsec): min=6781, max=83926, avg=38276.70, stdev=11544.53 00:35:10.841 clat (usec): min=15917, max=58745, avg=33376.66, stdev=1847.81 00:35:10.841 lat (usec): min=15966, max=58764, avg=33414.94, stdev=1846.95 00:35:10.841 clat percentiles (usec): 00:35:10.841 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.841 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:10.841 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:35:10.841 | 99.00th=[34866], 99.50th=[35390], 99.90th=[58459], 99.95th=[58983], 00:35:10.841 | 99.99th=[58983] 00:35:10.841 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1894.40, stdev=66.96, samples=20 00:35:10.841 iops : min= 416, max= 480, avg=473.60, stdev=16.74, samples=20 00:35:10.841 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:35:10.841 cpu : usr=98.06%, sys=1.45%, ctx=27, majf=0, minf=48 00:35:10.841 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.841 filename2: (groupid=0, jobs=1): err= 0: pid=2434863: Fri Jul 26 02:10:51 2024 00:35:10.841 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10010msec) 00:35:10.841 slat (usec): min=10, max=104, 
avg=44.40, stdev=15.93 00:35:10.841 clat (usec): min=15973, max=53353, avg=33298.33, stdev=1606.10 00:35:10.841 lat (usec): min=15994, max=53395, avg=33342.73, stdev=1605.19 00:35:10.841 clat percentiles (usec): 00:35:10.841 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:35:10.841 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:10.841 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:35:10.841 | 99.00th=[34341], 99.50th=[35390], 99.90th=[53216], 99.95th=[53216], 00:35:10.841 | 99.99th=[53216] 00:35:10.841 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1893.05, stdev=53.61, samples=19 00:35:10.841 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:35:10.841 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:35:10.841 cpu : usr=98.20%, sys=1.39%, ctx=16, majf=0, minf=56 00:35:10.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.841 filename2: (groupid=0, jobs=1): err= 0: pid=2434864: Fri Jul 26 02:10:51 2024 00:35:10.841 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.6MiB/10010msec) 00:35:10.841 slat (nsec): min=9590, max=94829, avg=37859.42, stdev=14105.19 00:35:10.841 clat (usec): min=13616, max=78162, avg=33272.23, stdev=2612.66 00:35:10.841 lat (usec): min=13626, max=78190, avg=33310.09, stdev=2611.64 00:35:10.841 clat percentiles (usec): 00:35:10.841 | 1.00th=[25822], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:10.841 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:10.841 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.841 | 99.00th=[40109], 99.50th=[53740], 
99.90th=[56886], 99.95th=[56886], 00:35:10.841 | 99.99th=[78119] 00:35:10.841 bw ( KiB/s): min= 1731, max= 1968, per=4.17%, avg=1899.11, stdev=58.92, samples=19 00:35:10.841 iops : min= 432, max= 492, avg=474.74, stdev=14.85, samples=19 00:35:10.841 lat (msec) : 20=0.63%, 50=98.87%, 100=0.50% 00:35:10.841 cpu : usr=95.68%, sys=2.55%, ctx=198, majf=0, minf=59 00:35:10.841 IO depths : 1=5.8%, 2=11.7%, 4=23.8%, 8=51.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:10.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 issued rwts: total=4766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.841 filename2: (groupid=0, jobs=1): err= 0: pid=2434865: Fri Jul 26 02:10:51 2024 00:35:10.841 read: IOPS=475, BW=1902KiB/s (1948kB/s)(18.6MiB/10028msec) 00:35:10.841 slat (nsec): min=9363, max=92303, avg=36291.08, stdev=12541.78 00:35:10.841 clat (usec): min=18842, max=36051, avg=33349.43, stdev=970.17 00:35:10.841 lat (usec): min=18873, max=36119, avg=33385.72, stdev=969.17 00:35:10.841 clat percentiles (usec): 00:35:10.841 | 1.00th=[32113], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.841 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:10.841 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.841 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:35:10.841 | 99.99th=[35914] 00:35:10.841 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1900.80, stdev=46.89, samples=20 00:35:10.841 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:35:10.841 lat (msec) : 20=0.34%, 50=99.66% 00:35:10.841 cpu : usr=95.96%, sys=2.38%, ctx=105, majf=0, minf=61 00:35:10.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:35:10.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.841 filename2: (groupid=0, jobs=1): err= 0: pid=2434866: Fri Jul 26 02:10:51 2024 00:35:10.841 read: IOPS=474, BW=1898KiB/s (1943kB/s)(18.6MiB/10016msec) 00:35:10.841 slat (nsec): min=9413, max=79475, avg=32828.42, stdev=11463.73 00:35:10.841 clat (usec): min=27673, max=42715, avg=33452.84, stdev=727.28 00:35:10.841 lat (usec): min=27725, max=42749, avg=33485.67, stdev=725.56 00:35:10.841 clat percentiles (usec): 00:35:10.841 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:10.841 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:10.841 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:10.841 | 99.00th=[34866], 99.50th=[35914], 99.90th=[42730], 99.95th=[42730], 00:35:10.841 | 99.99th=[42730] 00:35:10.841 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1894.40, stdev=52.53, samples=20 00:35:10.841 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:10.841 lat (msec) : 50=100.00% 00:35:10.841 cpu : usr=95.54%, sys=2.81%, ctx=154, majf=0, minf=87 00:35:10.841 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.841 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.841 00:35:10.841 Run status group 0 (all jobs): 00:35:10.841 READ: bw=44.5MiB/s (46.6MB/s), 1886KiB/s-1910KiB/s (1932kB/s-1955kB/s), io=446MiB (467MB), run=10003-10028msec 00:35:10.841 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 
00:35:10.841 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:10.841 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.841 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:10.841 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:10.841 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:10.841 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.841 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.841 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.841 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- 
# rpc_cmd bdev_null_delete bdev_null1 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 
00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 bdev_null0 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t 
tcp -a 10.0.0.2 -s 4420 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 [2024-07-26 02:10:52.016208] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 bdev_null1 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:10.842 { 00:35:10.842 "params": { 00:35:10.842 "name": "Nvme$subsystem", 00:35:10.842 "trtype": "$TEST_TRANSPORT", 00:35:10.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.842 "adrfam": "ipv4", 00:35:10.842 "trsvcid": "$NVMF_PORT", 00:35:10.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.842 "hdgst": ${hdgst:-false}, 00:35:10.842 "ddgst": ${ddgst:-false} 00:35:10.842 }, 00:35:10.842 "method": "bdev_nvme_attach_controller" 00:35:10.842 } 00:35:10.842 EOF 00:35:10.842 )") 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:10.842 { 00:35:10.842 "params": { 00:35:10.842 "name": "Nvme$subsystem", 00:35:10.842 "trtype": "$TEST_TRANSPORT", 00:35:10.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.842 "adrfam": "ipv4", 00:35:10.842 "trsvcid": "$NVMF_PORT", 00:35:10.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.842 "hdgst": ${hdgst:-false}, 00:35:10.842 "ddgst": ${ddgst:-false} 00:35:10.842 }, 00:35:10.842 "method": "bdev_nvme_attach_controller" 00:35:10.842 } 00:35:10.842 EOF 00:35:10.842 )") 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:10.842 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:10.843 "params": { 00:35:10.843 "name": "Nvme0", 00:35:10.843 "trtype": "tcp", 00:35:10.843 "traddr": "10.0.0.2", 00:35:10.843 "adrfam": "ipv4", 00:35:10.843 "trsvcid": "4420", 00:35:10.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.843 "hdgst": false, 00:35:10.843 "ddgst": false 00:35:10.843 }, 00:35:10.843 "method": "bdev_nvme_attach_controller" 00:35:10.843 },{ 00:35:10.843 "params": { 00:35:10.843 "name": "Nvme1", 00:35:10.843 "trtype": "tcp", 00:35:10.843 "traddr": "10.0.0.2", 00:35:10.843 "adrfam": "ipv4", 00:35:10.843 "trsvcid": "4420", 00:35:10.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:10.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:10.843 "hdgst": false, 00:35:10.843 "ddgst": false 00:35:10.843 }, 00:35:10.843 "method": "bdev_nvme_attach_controller" 00:35:10.843 }' 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:10.843 02:10:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:10.843 02:10:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.843 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:10.843 ... 00:35:10.843 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:10.843 ... 00:35:10.843 fio-3.35 00:35:10.843 Starting 4 threads 00:35:10.843 EAL: No free 2048 kB hugepages reported on node 1 00:35:17.394 00:35:17.394 filename0: (groupid=0, jobs=1): err= 0: pid=2436249: Fri Jul 26 02:10:58 2024 00:35:17.394 read: IOPS=1926, BW=15.0MiB/s (15.8MB/s)(75.3MiB/5002msec) 00:35:17.394 slat (nsec): min=3902, max=59254, avg=12141.15, stdev=4548.54 00:35:17.394 clat (usec): min=883, max=10727, avg=4115.66, stdev=685.45 00:35:17.394 lat (usec): min=896, max=10739, avg=4127.80, stdev=685.03 00:35:17.394 clat percentiles (usec): 00:35:17.394 | 1.00th=[ 2835], 5.00th=[ 3359], 10.00th=[ 3556], 20.00th=[ 3720], 00:35:17.394 | 30.00th=[ 3818], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4047], 00:35:17.394 | 70.00th=[ 4146], 80.00th=[ 4359], 90.00th=[ 5080], 95.00th=[ 5735], 00:35:17.394 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 7373], 99.95th=[ 8029], 00:35:17.394 | 99.99th=[10683] 00:35:17.394 bw ( KiB/s): min=15008, max=15872, per=24.88%, avg=15404.60, stdev=293.65, samples=10 00:35:17.394 iops : min= 1876, max= 1984, avg=1925.50, stdev=36.78, samples=10 00:35:17.394 lat (usec) : 1000=0.02% 00:35:17.394 lat (msec) : 2=0.06%, 4=52.99%, 10=46.92%, 20=0.01% 00:35:17.394 cpu : usr=95.42%, sys=4.08%, ctx=8, majf=0, minf=9 00:35:17.394 IO depths : 1=0.1%, 2=2.2%, 4=70.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:17.394 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.394 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.394 issued rwts: total=9634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.394 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:17.394 filename0: (groupid=0, jobs=1): err= 0: pid=2436250: Fri Jul 26 02:10:58 2024 00:35:17.394 read: IOPS=1939, BW=15.2MiB/s (15.9MB/s)(75.8MiB/5003msec) 00:35:17.394 slat (nsec): min=3902, max=60062, avg=12815.56, stdev=5862.99 00:35:17.394 clat (usec): min=1172, max=8483, avg=4085.22, stdev=734.78 00:35:17.394 lat (usec): min=1186, max=8495, avg=4098.03, stdev=734.45 00:35:17.394 clat percentiles (usec): 00:35:17.394 | 1.00th=[ 2966], 5.00th=[ 3294], 10.00th=[ 3425], 20.00th=[ 3621], 00:35:17.394 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3916], 60.00th=[ 4015], 00:35:17.394 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 5473], 95.00th=[ 5800], 00:35:17.394 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 6915], 99.95th=[ 8455], 00:35:17.394 | 99.99th=[ 8455] 00:35:17.394 bw ( KiB/s): min=14864, max=16096, per=25.06%, avg=15516.70, stdev=382.91, samples=10 00:35:17.394 iops : min= 1858, max= 2012, avg=1939.50, stdev=47.83, samples=10 00:35:17.394 lat (msec) : 2=0.01%, 4=58.00%, 10=41.99% 00:35:17.394 cpu : usr=95.64%, sys=3.84%, ctx=27, majf=0, minf=0 00:35:17.394 IO depths : 1=0.1%, 2=1.1%, 4=71.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:17.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.394 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.394 issued rwts: total=9704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.394 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:17.394 filename1: (groupid=0, jobs=1): err= 0: pid=2436251: Fri Jul 26 02:10:58 2024 00:35:17.394 read: IOPS=1938, BW=15.1MiB/s (15.9MB/s)(75.8MiB/5003msec) 00:35:17.394 slat (nsec): min=3700, max=60842, avg=13252.52, 
stdev=5203.77 00:35:17.394 clat (usec): min=989, max=11350, avg=4085.06, stdev=748.80 00:35:17.394 lat (usec): min=1005, max=11376, avg=4098.31, stdev=748.00 00:35:17.394 clat percentiles (usec): 00:35:17.394 | 1.00th=[ 2835], 5.00th=[ 3261], 10.00th=[ 3458], 20.00th=[ 3621], 00:35:17.394 | 30.00th=[ 3720], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 4015], 00:35:17.394 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 5407], 95.00th=[ 5735], 00:35:17.394 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7177], 99.95th=[ 9241], 00:35:17.394 | 99.99th=[11338] 00:35:17.394 bw ( KiB/s): min=15216, max=15824, per=25.05%, avg=15505.60, stdev=203.29, samples=10 00:35:17.394 iops : min= 1902, max= 1978, avg=1938.20, stdev=25.41, samples=10 00:35:17.394 lat (usec) : 1000=0.01% 00:35:17.394 lat (msec) : 2=0.08%, 4=56.96%, 10=42.93%, 20=0.01% 00:35:17.394 cpu : usr=94.50%, sys=4.56%, ctx=211, majf=0, minf=0 00:35:17.394 IO depths : 1=0.1%, 2=2.8%, 4=69.8%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:17.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.394 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.394 issued rwts: total=9699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.394 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:17.394 filename1: (groupid=0, jobs=1): err= 0: pid=2436252: Fri Jul 26 02:10:58 2024 00:35:17.394 read: IOPS=1934, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5002msec) 00:35:17.394 slat (nsec): min=3867, max=54541, avg=12892.71, stdev=5166.22 00:35:17.394 clat (usec): min=803, max=10708, avg=4094.07, stdev=618.09 00:35:17.394 lat (usec): min=816, max=10720, avg=4106.97, stdev=617.33 00:35:17.394 clat percentiles (usec): 00:35:17.394 | 1.00th=[ 3032], 5.00th=[ 3490], 10.00th=[ 3621], 20.00th=[ 3720], 00:35:17.394 | 30.00th=[ 3818], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 4047], 00:35:17.394 | 70.00th=[ 4113], 80.00th=[ 4293], 90.00th=[ 4752], 95.00th=[ 5669], 00:35:17.394 | 99.00th=[ 
6325], 99.50th=[ 6521], 99.90th=[ 7177], 99.95th=[ 7504], 00:35:17.394 | 99.99th=[10683] 00:35:17.394 bw ( KiB/s): min=15120, max=16000, per=25.06%, avg=15511.11, stdev=303.38, samples=9 00:35:17.394 iops : min= 1890, max= 2000, avg=1938.89, stdev=37.92, samples=9 00:35:17.394 lat (usec) : 1000=0.01% 00:35:17.394 lat (msec) : 2=0.05%, 4=55.24%, 10=44.69%, 20=0.01% 00:35:17.394 cpu : usr=94.34%, sys=4.78%, ctx=168, majf=0, minf=9 00:35:17.394 IO depths : 1=0.1%, 2=1.6%, 4=71.8%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:17.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.394 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.394 issued rwts: total=9678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.394 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:17.394 00:35:17.394 Run status group 0 (all jobs): 00:35:17.394 READ: bw=60.5MiB/s (63.4MB/s), 15.0MiB/s-15.2MiB/s (15.8MB/s-15.9MB/s), io=302MiB (317MB), run=5002-5003msec 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.394 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.395 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.395 00:35:17.395 real 0m24.362s 00:35:17.395 user 4m30.412s 00:35:17.395 sys 0m7.811s 00:35:17.395 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:17.395 02:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.395 ************************************ 00:35:17.395 END TEST fio_dif_rand_params 00:35:17.395 ************************************ 00:35:17.395 02:10:58 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 
00:35:17.395 02:10:58 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:17.395 02:10:58 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:17.395 02:10:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:17.395 ************************************ 00:35:17.395 START TEST fio_dif_digest 00:35:17.395 ************************************ 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:17.395 bdev_null0 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:17.395 [2024-07-26 02:10:58.486335] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:17.395 { 00:35:17.395 "params": { 00:35:17.395 "name": "Nvme$subsystem", 00:35:17.395 "trtype": "$TEST_TRANSPORT", 00:35:17.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:17.395 "adrfam": "ipv4", 00:35:17.395 "trsvcid": "$NVMF_PORT", 00:35:17.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:17.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:17.395 "hdgst": ${hdgst:-false}, 00:35:17.395 "ddgst": ${ddgst:-false} 00:35:17.395 }, 00:35:17.395 "method": "bdev_nvme_attach_controller" 00:35:17.395 } 00:35:17.395 EOF 00:35:17.395 )") 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:17.395 "params": { 00:35:17.395 "name": "Nvme0", 00:35:17.395 "trtype": "tcp", 00:35:17.395 "traddr": "10.0.0.2", 00:35:17.395 "adrfam": "ipv4", 00:35:17.395 "trsvcid": "4420", 00:35:17.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.395 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:17.395 "hdgst": true, 00:35:17.395 "ddgst": true 00:35:17.395 }, 00:35:17.395 "method": "bdev_nvme_attach_controller" 00:35:17.395 }' 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:17.395 02:10:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.395 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:17.395 ... 
00:35:17.395 fio-3.35 00:35:17.395 Starting 3 threads 00:35:17.395 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.356 00:35:27.356 filename0: (groupid=0, jobs=1): err= 0: pid=2437084: Fri Jul 26 02:11:09 2024 00:35:27.356 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(271MiB/10044msec) 00:35:27.356 slat (nsec): min=4769, max=62449, avg=22598.14, stdev=6105.69 00:35:27.356 clat (usec): min=7512, max=52599, avg=13850.74, stdev=1583.43 00:35:27.356 lat (usec): min=7527, max=52618, avg=13873.34, stdev=1583.21 00:35:27.356 clat percentiles (usec): 00:35:27.356 | 1.00th=[ 9896], 5.00th=[12125], 10.00th=[12649], 20.00th=[13042], 00:35:27.356 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:35:27.356 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15270], 00:35:27.356 | 99.00th=[16057], 99.50th=[16450], 99.90th=[21365], 99.95th=[50594], 00:35:27.356 | 99.99th=[52691] 00:35:27.356 bw ( KiB/s): min=26624, max=29440, per=35.10%, avg=27724.80, stdev=638.52, samples=20 00:35:27.356 iops : min= 208, max= 230, avg=216.60, stdev= 4.99, samples=20 00:35:27.356 lat (msec) : 10=1.29%, 20=98.57%, 50=0.05%, 100=0.09% 00:35:27.356 cpu : usr=93.69%, sys=5.37%, ctx=27, majf=0, minf=152 00:35:27.356 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.356 issued rwts: total=2168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.356 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:27.356 filename0: (groupid=0, jobs=1): err= 0: pid=2437085: Fri Jul 26 02:11:09 2024 00:35:27.356 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10044msec) 00:35:27.356 slat (nsec): min=4881, max=76156, avg=16640.77, stdev=5270.08 00:35:27.356 clat (usec): min=9465, max=57888, avg=15092.94, stdev=3189.73 00:35:27.356 lat (usec): min=9506, max=57903, avg=15109.58, 
stdev=3189.70 00:35:27.356 clat percentiles (usec): 00:35:27.356 | 1.00th=[12256], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 00:35:27.356 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:35:27.356 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16188], 95.00th=[16712], 00:35:27.356 | 99.00th=[17695], 99.50th=[52691], 99.90th=[56886], 99.95th=[57934], 00:35:27.357 | 99.99th=[57934] 00:35:27.357 bw ( KiB/s): min=22016, max=26368, per=32.23%, avg=25461.65, stdev=1063.73, samples=20 00:35:27.357 iops : min= 172, max= 206, avg=198.90, stdev= 8.32, samples=20 00:35:27.357 lat (msec) : 10=0.25%, 20=99.10%, 50=0.15%, 100=0.50% 00:35:27.357 cpu : usr=93.56%, sys=5.92%, ctx=56, majf=0, minf=111 00:35:27.357 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.357 issued rwts: total=1991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.357 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:27.357 filename0: (groupid=0, jobs=1): err= 0: pid=2437086: Fri Jul 26 02:11:09 2024 00:35:27.357 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(255MiB/10044msec) 00:35:27.357 slat (nsec): min=4610, max=45702, avg=16263.89, stdev=5110.02 00:35:27.357 clat (usec): min=9177, max=55786, avg=14736.71, stdev=2218.03 00:35:27.357 lat (usec): min=9190, max=55800, avg=14752.98, stdev=2217.84 00:35:27.357 clat percentiles (usec): 00:35:27.357 | 1.00th=[11338], 5.00th=[12911], 10.00th=[13304], 20.00th=[13829], 00:35:27.357 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:35:27.357 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:35:27.357 | 99.00th=[17433], 99.50th=[17695], 99.90th=[54789], 99.95th=[55313], 00:35:27.357 | 99.99th=[55837] 00:35:27.357 bw ( KiB/s): min=24320, max=27136, per=33.01%, avg=26073.60, stdev=634.04, 
samples=20 00:35:27.357 iops : min= 190, max= 212, avg=203.70, stdev= 4.95, samples=20 00:35:27.357 lat (msec) : 10=0.10%, 20=99.61%, 50=0.10%, 100=0.20% 00:35:27.357 cpu : usr=92.99%, sys=6.37%, ctx=21, majf=0, minf=193 00:35:27.357 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.357 issued rwts: total=2039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.357 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:27.357 00:35:27.357 Run status group 0 (all jobs): 00:35:27.357 READ: bw=77.1MiB/s (80.9MB/s), 24.8MiB/s-27.0MiB/s (26.0MB/s-28.3MB/s), io=775MiB (812MB), run=10044-10044msec 00:35:27.614 02:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:27.614 02:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:27.614 02:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.614 02:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:27.614 02:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:27.614 02:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:27.615 02:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.615 02:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:27.615 02:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.615 02:11:09 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:27.615 02:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.615 02:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:27.615 02:11:09 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.615 00:35:27.615 real 0m11.108s 00:35:27.615 user 0m29.278s 00:35:27.615 sys 0m2.037s 00:35:27.615 02:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:27.615 02:11:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:27.615 ************************************ 00:35:27.615 END TEST fio_dif_digest 00:35:27.615 ************************************ 00:35:27.615 02:11:09 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:27.615 02:11:09 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:27.615 02:11:09 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:27.615 02:11:09 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:27.615 02:11:09 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:27.615 02:11:09 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:27.615 02:11:09 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:27.615 02:11:09 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:27.615 rmmod nvme_tcp 00:35:27.615 rmmod nvme_fabrics 00:35:27.874 rmmod nvme_keyring 00:35:27.874 02:11:09 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:27.874 02:11:09 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:27.874 02:11:09 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:27.874 02:11:09 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2431078 ']' 00:35:27.874 02:11:09 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2431078 00:35:27.874 02:11:09 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2431078 ']' 00:35:27.874 02:11:09 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2431078 00:35:27.874 02:11:09 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:35:27.874 02:11:09 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:27.874 02:11:09 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2431078 00:35:27.874 02:11:09 nvmf_dif -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:27.874 02:11:09 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:27.874 02:11:09 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2431078' 00:35:27.874 killing process with pid 2431078 00:35:27.874 02:11:09 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2431078 00:35:27.874 02:11:09 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2431078 00:35:28.162 02:11:09 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:28.162 02:11:09 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:29.094 Waiting for block devices as requested 00:35:29.094 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:29.094 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:29.352 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:29.352 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:29.352 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:29.352 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:29.610 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:29.610 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:29.610 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:29.610 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:29.867 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:29.867 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:29.867 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:30.124 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:30.124 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:30.124 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:30.124 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:30.382 02:11:12 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:30.382 02:11:12 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:30.382 02:11:12 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:30.382 
02:11:12 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:30.382 02:11:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.382 02:11:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:30.382 02:11:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.280 02:11:14 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:32.280 00:35:32.280 real 1m6.300s 00:35:32.280 user 6m26.575s 00:35:32.280 sys 0m19.013s 00:35:32.280 02:11:14 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:32.280 02:11:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.280 ************************************ 00:35:32.280 END TEST nvmf_dif 00:35:32.280 ************************************ 00:35:32.280 02:11:14 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:32.280 02:11:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:32.280 02:11:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:32.280 02:11:14 -- common/autotest_common.sh@10 -- # set +x 00:35:32.538 ************************************ 00:35:32.538 START TEST nvmf_abort_qd_sizes 00:35:32.538 ************************************ 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:32.538 * Looking for test storage... 
00:35:32.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.538 02:11:14 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:32.539 02:11:14 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:32.539 02:11:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:34.440 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:34.440 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:35:34.440 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:34.440 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.440 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:34.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:34.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:35:34.441 00:35:34.441 --- 10.0.0.2 ping statistics --- 00:35:34.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.441 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:35:34.441 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:34.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:35:34.441 00:35:34.441 --- 10.0.0.1 ping statistics --- 00:35:34.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.441 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:35:34.441 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.441 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:34.441 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:34.441 02:11:16 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:35.375 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:35.375 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:35.375 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:35.375 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:35.375 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:35.633 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:35.633 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:35.633 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:35.633 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:35.633 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:35.634 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:35.634 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:35.634 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:35.634 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:35.634 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:35:35.634 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:36.570 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:36.570 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.570 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:36.570 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2441903 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2441903 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2441903 ']' 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:36.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:36.571 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:36.833 [2024-07-26 02:11:18.584547] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:35:36.833 [2024-07-26 02:11:18.584618] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:36.833 EAL: No free 2048 kB hugepages reported on node 1 00:35:36.833 [2024-07-26 02:11:18.651882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:36.833 [2024-07-26 02:11:18.743726] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:36.833 [2024-07-26 02:11:18.743790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.833 [2024-07-26 02:11:18.743817] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.833 [2024-07-26 02:11:18.743831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.833 [2024-07-26 02:11:18.743844] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:36.833 [2024-07-26 02:11:18.743925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.833 [2024-07-26 02:11:18.743979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:36.833 [2024-07-26 02:11:18.744097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:36.833 [2024-07-26 02:11:18.744100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:37.089 02:11:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.089 ************************************ 00:35:37.089 START TEST spdk_target_abort 00:35:37.089 ************************************ 00:35:37.089 02:11:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:35:37.089 02:11:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:37.089 02:11:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:37.089 02:11:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.089 02:11:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.366 spdk_targetn1 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.366 [2024-07-26 02:11:21.748431] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.366 [2024-07-26 02:11:21.780661] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:40.366 02:11:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:40.366 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.641 Initializing NVMe Controllers 00:35:43.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:43.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:43.641 Initialization complete. Launching workers. 
00:35:43.641 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11419, failed: 0 00:35:43.641 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1307, failed to submit 10112 00:35:43.641 success 822, unsuccess 485, failed 0 00:35:43.641 02:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:43.641 02:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:43.641 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.913 Initializing NVMe Controllers 00:35:46.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:46.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:46.913 Initialization complete. Launching workers. 
00:35:46.913 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8618, failed: 0 00:35:46.913 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7371 00:35:46.913 success 362, unsuccess 885, failed 0 00:35:46.913 02:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:46.913 02:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:46.913 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.191 Initializing NVMe Controllers 00:35:50.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:50.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:50.191 Initialization complete. Launching workers. 
00:35:50.191 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31114, failed: 0 00:35:50.191 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2737, failed to submit 28377 00:35:50.191 success 525, unsuccess 2212, failed 0 00:35:50.191 02:11:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:50.191 02:11:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.191 02:11:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:50.191 02:11:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.191 02:11:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:50.191 02:11:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.191 02:11:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2441903 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2441903 ']' 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2441903 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2441903 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2441903' 00:35:51.171 killing process with pid 2441903 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2441903 00:35:51.171 02:11:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2441903 00:35:51.171 00:35:51.171 real 0m14.251s 00:35:51.171 user 0m53.966s 00:35:51.171 sys 0m2.583s 00:35:51.171 02:11:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:51.171 02:11:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.171 ************************************ 00:35:51.171 END TEST spdk_target_abort 00:35:51.171 ************************************ 00:35:51.430 02:11:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:51.430 02:11:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:51.430 02:11:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:51.430 02:11:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:51.430 ************************************ 00:35:51.430 START TEST kernel_target_abort 00:35:51.430 ************************************ 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:51.430 02:11:33 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:51.430 02:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:52.362 Waiting for block devices as requested 00:35:52.362 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:52.619 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:52.619 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:52.877 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:52.877 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:52.877 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:52.877 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:53.135 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:53.135 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:53.135 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:53.135 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:53.393 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:53.393 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:53.393 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:53.651 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:53.651 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:53.651 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:53.909 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:53.909 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:53.910 No valid GPT data, bailing 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:53.910 00:35:53.910 Discovery Log Number of Records 2, Generation counter 2 00:35:53.910 =====Discovery Log Entry 0====== 00:35:53.910 trtype: tcp 00:35:53.910 adrfam: ipv4 00:35:53.910 subtype: current discovery subsystem 00:35:53.910 treq: not specified, sq flow control disable supported 00:35:53.910 portid: 1 00:35:53.910 trsvcid: 4420 00:35:53.910 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:53.910 traddr: 10.0.0.1 00:35:53.910 eflags: none 00:35:53.910 sectype: none 00:35:53.910 =====Discovery Log Entry 1====== 00:35:53.910 trtype: tcp 00:35:53.910 adrfam: ipv4 00:35:53.910 subtype: nvme subsystem 00:35:53.910 treq: not specified, sq flow control disable supported 00:35:53.910 portid: 1 00:35:53.910 trsvcid: 4420 00:35:53.910 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:53.910 traddr: 10.0.0.1 00:35:53.910 eflags: none 00:35:53.910 sectype: none 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:53.910 02:11:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:53.910 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.196 Initializing NVMe Controllers 00:35:57.196 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:57.196 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:57.196 Initialization complete. Launching workers. 
00:35:57.196 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39361, failed: 0 00:35:57.196 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39361, failed to submit 0 00:35:57.196 success 0, unsuccess 39361, failed 0 00:35:57.196 02:11:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:57.196 02:11:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:57.196 EAL: No free 2048 kB hugepages reported on node 1 00:36:00.478 Initializing NVMe Controllers 00:36:00.478 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:00.478 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:00.478 Initialization complete. Launching workers. 
00:36:00.478 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70724, failed: 0 00:36:00.478 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17830, failed to submit 52894 00:36:00.478 success 0, unsuccess 17830, failed 0 00:36:00.478 02:11:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:00.478 02:11:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:00.478 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.763 Initializing NVMe Controllers 00:36:03.763 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:03.763 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:03.763 Initialization complete. Launching workers. 
00:36:03.763 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71783, failed: 0 00:36:03.763 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17938, failed to submit 53845 00:36:03.763 success 0, unsuccess 17938, failed 0 00:36:03.763 02:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:03.763 02:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:03.763 02:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:03.763 02:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:03.763 02:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:03.763 02:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:03.763 02:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:03.763 02:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:03.763 02:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:03.763 02:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:04.329 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:04.329 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:04.588 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:04.588 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:04.588 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:04.588 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:04.588 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:04.588 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:04.588 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:04.588 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:04.588 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:04.588 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:04.588 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:04.588 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:04.588 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:04.588 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:05.527 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:05.785 00:36:05.786 real 0m14.346s 00:36:05.786 user 0m5.682s 00:36:05.786 sys 0m3.321s 00:36:05.786 02:11:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:05.786 02:11:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.786 ************************************ 00:36:05.786 END TEST kernel_target_abort 00:36:05.786 ************************************ 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:05.786 rmmod nvme_tcp 00:36:05.786 rmmod nvme_fabrics 00:36:05.786 rmmod nvme_keyring 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2441903 ']' 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2441903 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2441903 ']' 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2441903 00:36:05.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2441903) - No such process 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2441903 is not found' 00:36:05.786 Process with pid 2441903 is not found 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:05.786 02:11:47 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:06.721 Waiting for block devices as requested 00:36:06.982 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:06.982 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:07.239 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:07.239 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:07.239 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:07.239 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:07.496 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:07.496 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:07.496 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:07.497 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:07.497 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:07.755 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:07.755 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:07.755 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:07.755 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:08.013 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:08.013 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:08.013 02:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:08.013 02:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:08.013 02:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:08.013 02:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:08.013 02:11:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.013 02:11:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:08.013 02:11:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.546 02:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:10.546 00:36:10.546 real 0m37.698s 00:36:10.546 user 1m1.629s 00:36:10.546 sys 0m9.112s 00:36:10.546 02:11:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:10.546 02:11:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:10.546 ************************************ 00:36:10.546 END TEST nvmf_abort_qd_sizes 00:36:10.546 ************************************ 00:36:10.546 02:11:52 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:10.546 02:11:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:10.546 02:11:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:10.546 02:11:52 -- common/autotest_common.sh@10 -- # set +x 00:36:10.546 ************************************ 00:36:10.546 START TEST keyring_file 00:36:10.546 ************************************ 00:36:10.546 02:11:52 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:10.546 * Looking for test storage... 00:36:10.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:10.546 02:11:52 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:10.546 02:11:52 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:10.546 02:11:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.547 02:11:52 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.547 02:11:52 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.547 02:11:52 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.547 02:11:52 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.547 02:11:52 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.547 02:11:52 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.547 02:11:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:10.547 02:11:52 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:10.547 02:11:52 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JlMpIKrTYe 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JlMpIKrTYe 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JlMpIKrTYe 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.JlMpIKrTYe 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0m8k6n3hU0 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:10.547 02:11:52 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:10.547 02:11:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0m8k6n3hU0 00:36:10.547 02:11:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0m8k6n3hU0 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0m8k6n3hU0 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=2447649 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:10.547 02:11:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2447649 00:36:10.547 02:11:52 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2447649 ']' 00:36:10.547 02:11:52 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:10.547 02:11:52 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:10.547 02:11:52 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:10.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:10.547 02:11:52 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:10.547 02:11:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:10.547 [2024-07-26 02:11:52.237661] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:36:10.547 [2024-07-26 02:11:52.237760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2447649 ] 00:36:10.547 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.547 [2024-07-26 02:11:52.301429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:10.547 [2024-07-26 02:11:52.392893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:10.806 02:11:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:10.806 [2024-07-26 02:11:52.637520] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:10.806 null0 00:36:10.806 [2024-07-26 02:11:52.669583] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:10.806 [2024-07-26 02:11:52.670088] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:10.806 [2024-07-26 02:11:52.677586] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.806 02:11:52 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:10.806 02:11:52 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:10.807 [2024-07-26 02:11:52.689611] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:10.807 request: 00:36:10.807 { 00:36:10.807 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:10.807 "secure_channel": false, 00:36:10.807 "listen_address": { 00:36:10.807 "trtype": "tcp", 00:36:10.807 "traddr": "127.0.0.1", 00:36:10.807 "trsvcid": "4420" 00:36:10.807 }, 00:36:10.807 "method": "nvmf_subsystem_add_listener", 00:36:10.807 "req_id": 1 00:36:10.807 } 00:36:10.807 Got JSON-RPC error response 00:36:10.807 response: 00:36:10.807 { 00:36:10.807 "code": -32602, 00:36:10.807 "message": "Invalid parameters" 00:36:10.807 } 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:10.807 02:11:52 keyring_file -- keyring/file.sh@46 -- # bperfpid=2447667 00:36:10.807 02:11:52 keyring_file -- keyring/file.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:10.807 02:11:52 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2447667 /var/tmp/bperf.sock 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2447667 ']' 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:10.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:10.807 02:11:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:10.807 [2024-07-26 02:11:52.736958] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:36:10.807 [2024-07-26 02:11:52.737032] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2447667 ] 00:36:10.807 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.807 [2024-07-26 02:11:52.794536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.065 [2024-07-26 02:11:52.886536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.065 02:11:52 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:11.065 02:11:52 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:11.065 02:11:52 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JlMpIKrTYe 00:36:11.065 02:11:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JlMpIKrTYe 00:36:11.322 02:11:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0m8k6n3hU0 00:36:11.322 02:11:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0m8k6n3hU0 00:36:11.579 02:11:53 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:11.579 02:11:53 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:11.579 02:11:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:11.579 02:11:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:11.579 02:11:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:11.869 02:11:53 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.JlMpIKrTYe == 
\/\t\m\p\/\t\m\p\.\J\l\M\p\I\K\r\T\Y\e ]] 00:36:11.869 02:11:53 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:11.869 02:11:53 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:11.869 02:11:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:11.869 02:11:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:11.869 02:11:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:12.127 02:11:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0m8k6n3hU0 == \/\t\m\p\/\t\m\p\.\0\m\8\k\6\n\3\h\U\0 ]] 00:36:12.127 02:11:54 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:12.127 02:11:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:12.127 02:11:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:12.127 02:11:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.127 02:11:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.127 02:11:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:12.384 02:11:54 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:12.384 02:11:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:12.384 02:11:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:12.384 02:11:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:12.384 02:11:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.384 02:11:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.384 02:11:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:12.639 02:11:54 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:36:12.639 02:11:54 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:12.639 02:11:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:12.896 [2024-07-26 02:11:54.740027] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:12.896 nvme0n1 00:36:12.896 02:11:54 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:12.896 02:11:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:12.896 02:11:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:12.896 02:11:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.896 02:11:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.896 02:11:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:13.153 02:11:55 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:13.153 02:11:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:13.153 02:11:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:13.153 02:11:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:13.153 02:11:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:13.153 02:11:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.153 02:11:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:13.410 02:11:55 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:13.410 02:11:55 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:13.410 Running I/O for 1 seconds... 00:36:14.785 00:36:14.785 Latency(us) 00:36:14.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.785 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:14.785 nvme0n1 : 1.01 7267.88 28.39 0.00 0.00 17506.39 9320.68 31845.64 00:36:14.785 =================================================================================================================== 00:36:14.785 Total : 7267.88 28.39 0.00 0.00 17506.39 9320.68 31845.64 00:36:14.785 0 00:36:14.785 02:11:56 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:14.785 02:11:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:14.785 02:11:56 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:14.785 02:11:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:14.785 02:11:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:14.785 02:11:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.785 02:11:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.785 02:11:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:15.043 02:11:56 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:15.043 02:11:56 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:15.043 02:11:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:15.043 02:11:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.043 02:11:56 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.043 02:11:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.043 02:11:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:15.301 02:11:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:15.301 02:11:57 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:15.301 02:11:57 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:15.301 02:11:57 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:15.301 02:11:57 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:15.301 02:11:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:15.301 02:11:57 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:15.301 02:11:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:15.301 02:11:57 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:15.301 02:11:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:15.557 [2024-07-26 02:11:57.455497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:15.557 [2024-07-26 02:11:57.456021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc8f0 (107): Transport endpoint is not connected 00:36:15.557 [2024-07-26 02:11:57.457006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc8f0 (9): Bad file descriptor 00:36:15.557 [2024-07-26 02:11:57.458005] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:15.557 [2024-07-26 02:11:57.458029] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:15.557 [2024-07-26 02:11:57.458045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:15.557 request: 00:36:15.557 { 00:36:15.557 "name": "nvme0", 00:36:15.557 "trtype": "tcp", 00:36:15.557 "traddr": "127.0.0.1", 00:36:15.557 "adrfam": "ipv4", 00:36:15.557 "trsvcid": "4420", 00:36:15.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:15.557 "prchk_reftag": false, 00:36:15.557 "prchk_guard": false, 00:36:15.557 "hdgst": false, 00:36:15.557 "ddgst": false, 00:36:15.557 "psk": "key1", 00:36:15.557 "method": "bdev_nvme_attach_controller", 00:36:15.557 "req_id": 1 00:36:15.557 } 00:36:15.557 Got JSON-RPC error response 00:36:15.557 response: 00:36:15.557 { 00:36:15.557 "code": -5, 00:36:15.557 "message": "Input/output error" 00:36:15.557 } 00:36:15.557 02:11:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:15.557 02:11:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:15.557 02:11:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:15.557 02:11:57 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:15.557 02:11:57 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:15.557 
02:11:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:15.557 02:11:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.557 02:11:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.557 02:11:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.557 02:11:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:15.814 02:11:57 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:15.814 02:11:57 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:15.814 02:11:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:15.814 02:11:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.814 02:11:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.814 02:11:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.814 02:11:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:16.071 02:11:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:16.071 02:11:57 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:16.072 02:11:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:16.329 02:11:58 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:16.329 02:11:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:16.586 02:11:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:16.586 02:11:58 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.586 02:11:58 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:16.844 02:11:58 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:16.844 02:11:58 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.JlMpIKrTYe 00:36:16.844 02:11:58 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.JlMpIKrTYe 00:36:16.844 02:11:58 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:16.844 02:11:58 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.JlMpIKrTYe 00:36:16.844 02:11:58 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:16.844 02:11:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:16.844 02:11:58 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:16.844 02:11:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:16.844 02:11:58 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JlMpIKrTYe 00:36:16.844 02:11:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JlMpIKrTYe 00:36:17.102 [2024-07-26 02:11:58.971951] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JlMpIKrTYe': 0100660 00:36:17.102 [2024-07-26 02:11:58.971996] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:17.102 request: 00:36:17.102 { 00:36:17.102 "name": "key0", 00:36:17.102 "path": "/tmp/tmp.JlMpIKrTYe", 00:36:17.102 "method": "keyring_file_add_key", 00:36:17.102 "req_id": 1 00:36:17.102 } 00:36:17.102 Got JSON-RPC error response 00:36:17.102 response: 00:36:17.102 { 00:36:17.102 "code": -1, 
00:36:17.102 "message": "Operation not permitted" 00:36:17.102 } 00:36:17.102 02:11:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:17.102 02:11:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:17.102 02:11:58 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:17.102 02:11:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:17.102 02:11:58 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.JlMpIKrTYe 00:36:17.102 02:11:58 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JlMpIKrTYe 00:36:17.102 02:11:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JlMpIKrTYe 00:36:17.360 02:11:59 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.JlMpIKrTYe 00:36:17.360 02:11:59 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:17.360 02:11:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:17.360 02:11:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:17.360 02:11:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.360 02:11:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:17.360 02:11:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.617 02:11:59 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:17.617 02:11:59 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.617 02:11:59 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:17.617 02:11:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.617 02:11:59 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:17.617 02:11:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:17.617 02:11:59 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:17.618 02:11:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:17.618 02:11:59 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.618 02:11:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.875 [2024-07-26 02:11:59.721996] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.JlMpIKrTYe': No such file or directory 00:36:17.875 [2024-07-26 02:11:59.722034] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:17.875 [2024-07-26 02:11:59.722087] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:17.875 [2024-07-26 02:11:59.722124] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:17.875 [2024-07-26 02:11:59.722136] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:17.875 request: 00:36:17.875 { 00:36:17.875 "name": "nvme0", 00:36:17.875 "trtype": "tcp", 00:36:17.875 "traddr": "127.0.0.1", 00:36:17.875 "adrfam": "ipv4", 00:36:17.875 "trsvcid": "4420", 00:36:17.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.875 
"hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:17.875 "prchk_reftag": false, 00:36:17.875 "prchk_guard": false, 00:36:17.875 "hdgst": false, 00:36:17.875 "ddgst": false, 00:36:17.875 "psk": "key0", 00:36:17.875 "method": "bdev_nvme_attach_controller", 00:36:17.875 "req_id": 1 00:36:17.875 } 00:36:17.875 Got JSON-RPC error response 00:36:17.875 response: 00:36:17.875 { 00:36:17.875 "code": -19, 00:36:17.875 "message": "No such device" 00:36:17.875 } 00:36:17.875 02:11:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:17.875 02:11:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:17.875 02:11:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:17.875 02:11:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:17.875 02:11:59 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:17.875 02:11:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:18.133 02:11:59 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:18.133 02:11:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:18.133 02:11:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:18.133 02:11:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:18.133 02:11:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:18.133 02:11:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:18.133 02:11:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iLeJhkwNEJ 00:36:18.133 02:11:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:18.133 02:11:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:18.133 02:11:59 keyring_file -- nvmf/common.sh@702 -- # local prefix 
key digest 00:36:18.133 02:11:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:18.133 02:11:59 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:18.133 02:11:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:18.133 02:11:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:18.133 02:12:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iLeJhkwNEJ 00:36:18.133 02:12:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iLeJhkwNEJ 00:36:18.133 02:12:00 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.iLeJhkwNEJ 00:36:18.133 02:12:00 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iLeJhkwNEJ 00:36:18.133 02:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iLeJhkwNEJ 00:36:18.391 02:12:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.391 02:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.648 nvme0n1 00:36:18.648 02:12:00 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:18.648 02:12:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:18.648 02:12:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:18.648 02:12:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.648 02:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.648 02:12:00 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.906 02:12:00 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:18.906 02:12:00 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:18.906 02:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:19.164 02:12:01 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:19.164 02:12:01 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:19.164 02:12:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.164 02:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.164 02:12:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.422 02:12:01 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:19.422 02:12:01 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:19.422 02:12:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:19.422 02:12:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.422 02:12:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.422 02:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.422 02:12:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.680 02:12:01 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:19.680 02:12:01 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:19.680 02:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:20.245 02:12:01 
keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:20.245 02:12:01 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:20.245 02:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.245 02:12:02 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:20.245 02:12:02 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iLeJhkwNEJ 00:36:20.245 02:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iLeJhkwNEJ 00:36:20.503 02:12:02 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0m8k6n3hU0 00:36:20.503 02:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0m8k6n3hU0 00:36:20.760 02:12:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:20.760 02:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:21.018 nvme0n1 00:36:21.276 02:12:03 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:21.276 02:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:21.544 02:12:03 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:21.544 "subsystems": [ 00:36:21.544 { 00:36:21.544 "subsystem": "keyring", 00:36:21.544 "config": [ 00:36:21.544 { 
00:36:21.544 "method": "keyring_file_add_key", 00:36:21.544 "params": { 00:36:21.544 "name": "key0", 00:36:21.544 "path": "/tmp/tmp.iLeJhkwNEJ" 00:36:21.544 } 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "method": "keyring_file_add_key", 00:36:21.544 "params": { 00:36:21.544 "name": "key1", 00:36:21.544 "path": "/tmp/tmp.0m8k6n3hU0" 00:36:21.544 } 00:36:21.544 } 00:36:21.544 ] 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "subsystem": "iobuf", 00:36:21.544 "config": [ 00:36:21.544 { 00:36:21.544 "method": "iobuf_set_options", 00:36:21.544 "params": { 00:36:21.544 "small_pool_count": 8192, 00:36:21.544 "large_pool_count": 1024, 00:36:21.544 "small_bufsize": 8192, 00:36:21.544 "large_bufsize": 135168 00:36:21.544 } 00:36:21.544 } 00:36:21.544 ] 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "subsystem": "sock", 00:36:21.544 "config": [ 00:36:21.544 { 00:36:21.544 "method": "sock_set_default_impl", 00:36:21.544 "params": { 00:36:21.544 "impl_name": "posix" 00:36:21.544 } 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "method": "sock_impl_set_options", 00:36:21.544 "params": { 00:36:21.544 "impl_name": "ssl", 00:36:21.544 "recv_buf_size": 4096, 00:36:21.544 "send_buf_size": 4096, 00:36:21.544 "enable_recv_pipe": true, 00:36:21.544 "enable_quickack": false, 00:36:21.544 "enable_placement_id": 0, 00:36:21.544 "enable_zerocopy_send_server": true, 00:36:21.544 "enable_zerocopy_send_client": false, 00:36:21.544 "zerocopy_threshold": 0, 00:36:21.544 "tls_version": 0, 00:36:21.544 "enable_ktls": false 00:36:21.544 } 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "method": "sock_impl_set_options", 00:36:21.544 "params": { 00:36:21.544 "impl_name": "posix", 00:36:21.544 "recv_buf_size": 2097152, 00:36:21.544 "send_buf_size": 2097152, 00:36:21.544 "enable_recv_pipe": true, 00:36:21.544 "enable_quickack": false, 00:36:21.544 "enable_placement_id": 0, 00:36:21.544 "enable_zerocopy_send_server": true, 00:36:21.544 "enable_zerocopy_send_client": false, 00:36:21.544 "zerocopy_threshold": 0, 
00:36:21.544 "tls_version": 0, 00:36:21.544 "enable_ktls": false 00:36:21.544 } 00:36:21.544 } 00:36:21.544 ] 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "subsystem": "vmd", 00:36:21.544 "config": [] 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "subsystem": "accel", 00:36:21.544 "config": [ 00:36:21.544 { 00:36:21.544 "method": "accel_set_options", 00:36:21.544 "params": { 00:36:21.544 "small_cache_size": 128, 00:36:21.544 "large_cache_size": 16, 00:36:21.544 "task_count": 2048, 00:36:21.544 "sequence_count": 2048, 00:36:21.544 "buf_count": 2048 00:36:21.544 } 00:36:21.544 } 00:36:21.544 ] 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "subsystem": "bdev", 00:36:21.544 "config": [ 00:36:21.544 { 00:36:21.544 "method": "bdev_set_options", 00:36:21.544 "params": { 00:36:21.544 "bdev_io_pool_size": 65535, 00:36:21.544 "bdev_io_cache_size": 256, 00:36:21.544 "bdev_auto_examine": true, 00:36:21.544 "iobuf_small_cache_size": 128, 00:36:21.544 "iobuf_large_cache_size": 16 00:36:21.544 } 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "method": "bdev_raid_set_options", 00:36:21.544 "params": { 00:36:21.544 "process_window_size_kb": 1024, 00:36:21.544 "process_max_bandwidth_mb_sec": 0 00:36:21.544 } 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "method": "bdev_iscsi_set_options", 00:36:21.544 "params": { 00:36:21.544 "timeout_sec": 30 00:36:21.544 } 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "method": "bdev_nvme_set_options", 00:36:21.544 "params": { 00:36:21.544 "action_on_timeout": "none", 00:36:21.544 "timeout_us": 0, 00:36:21.544 "timeout_admin_us": 0, 00:36:21.544 "keep_alive_timeout_ms": 10000, 00:36:21.544 "arbitration_burst": 0, 00:36:21.544 "low_priority_weight": 0, 00:36:21.544 "medium_priority_weight": 0, 00:36:21.544 "high_priority_weight": 0, 00:36:21.544 "nvme_adminq_poll_period_us": 10000, 00:36:21.544 "nvme_ioq_poll_period_us": 0, 00:36:21.544 "io_queue_requests": 512, 00:36:21.544 "delay_cmd_submit": true, 00:36:21.544 "transport_retry_count": 4, 00:36:21.544 
"bdev_retry_count": 3, 00:36:21.544 "transport_ack_timeout": 0, 00:36:21.544 "ctrlr_loss_timeout_sec": 0, 00:36:21.544 "reconnect_delay_sec": 0, 00:36:21.544 "fast_io_fail_timeout_sec": 0, 00:36:21.544 "disable_auto_failback": false, 00:36:21.544 "generate_uuids": false, 00:36:21.544 "transport_tos": 0, 00:36:21.544 "nvme_error_stat": false, 00:36:21.544 "rdma_srq_size": 0, 00:36:21.544 "io_path_stat": false, 00:36:21.544 "allow_accel_sequence": false, 00:36:21.544 "rdma_max_cq_size": 0, 00:36:21.544 "rdma_cm_event_timeout_ms": 0, 00:36:21.544 "dhchap_digests": [ 00:36:21.544 "sha256", 00:36:21.544 "sha384", 00:36:21.544 "sha512" 00:36:21.544 ], 00:36:21.544 "dhchap_dhgroups": [ 00:36:21.544 "null", 00:36:21.544 "ffdhe2048", 00:36:21.544 "ffdhe3072", 00:36:21.544 "ffdhe4096", 00:36:21.544 "ffdhe6144", 00:36:21.544 "ffdhe8192" 00:36:21.544 ] 00:36:21.544 } 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "method": "bdev_nvme_attach_controller", 00:36:21.544 "params": { 00:36:21.544 "name": "nvme0", 00:36:21.544 "trtype": "TCP", 00:36:21.544 "adrfam": "IPv4", 00:36:21.544 "traddr": "127.0.0.1", 00:36:21.544 "trsvcid": "4420", 00:36:21.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.544 "prchk_reftag": false, 00:36:21.544 "prchk_guard": false, 00:36:21.544 "ctrlr_loss_timeout_sec": 0, 00:36:21.544 "reconnect_delay_sec": 0, 00:36:21.544 "fast_io_fail_timeout_sec": 0, 00:36:21.544 "psk": "key0", 00:36:21.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:21.544 "hdgst": false, 00:36:21.544 "ddgst": false 00:36:21.544 } 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "method": "bdev_nvme_set_hotplug", 00:36:21.544 "params": { 00:36:21.544 "period_us": 100000, 00:36:21.544 "enable": false 00:36:21.544 } 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "method": "bdev_wait_for_examine" 00:36:21.544 } 00:36:21.544 ] 00:36:21.544 }, 00:36:21.544 { 00:36:21.544 "subsystem": "nbd", 00:36:21.544 "config": [] 00:36:21.544 } 00:36:21.544 ] 00:36:21.544 }' 00:36:21.544 02:12:03 keyring_file 
-- keyring/file.sh@114 -- # killprocess 2447667 00:36:21.544 02:12:03 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2447667 ']' 00:36:21.544 02:12:03 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2447667 00:36:21.544 02:12:03 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:21.544 02:12:03 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:21.544 02:12:03 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2447667 00:36:21.545 02:12:03 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:21.545 02:12:03 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:21.545 02:12:03 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2447667' 00:36:21.545 killing process with pid 2447667 00:36:21.545 02:12:03 keyring_file -- common/autotest_common.sh@969 -- # kill 2447667 00:36:21.545 Received shutdown signal, test time was about 1.000000 seconds 00:36:21.545 00:36:21.545 Latency(us) 00:36:21.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.545 =================================================================================================================== 00:36:21.545 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:21.545 02:12:03 keyring_file -- common/autotest_common.sh@974 -- # wait 2447667 00:36:21.803 02:12:03 keyring_file -- keyring/file.sh@117 -- # bperfpid=2449228 00:36:21.803 02:12:03 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2449228 /var/tmp/bperf.sock 00:36:21.803 02:12:03 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2449228 ']' 00:36:21.803 02:12:03 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:21.803 02:12:03 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r 
/var/tmp/bperf.sock -z -c /dev/fd/63 00:36:21.803 02:12:03 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:21.803 02:12:03 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:21.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:21.803 02:12:03 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:21.803 "subsystems": [ 00:36:21.803 { 00:36:21.803 "subsystem": "keyring", 00:36:21.803 "config": [ 00:36:21.803 { 00:36:21.803 "method": "keyring_file_add_key", 00:36:21.803 "params": { 00:36:21.803 "name": "key0", 00:36:21.803 "path": "/tmp/tmp.iLeJhkwNEJ" 00:36:21.803 } 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "method": "keyring_file_add_key", 00:36:21.803 "params": { 00:36:21.803 "name": "key1", 00:36:21.803 "path": "/tmp/tmp.0m8k6n3hU0" 00:36:21.803 } 00:36:21.803 } 00:36:21.803 ] 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "subsystem": "iobuf", 00:36:21.803 "config": [ 00:36:21.803 { 00:36:21.803 "method": "iobuf_set_options", 00:36:21.803 "params": { 00:36:21.803 "small_pool_count": 8192, 00:36:21.803 "large_pool_count": 1024, 00:36:21.803 "small_bufsize": 8192, 00:36:21.803 "large_bufsize": 135168 00:36:21.803 } 00:36:21.803 } 00:36:21.803 ] 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "subsystem": "sock", 00:36:21.803 "config": [ 00:36:21.803 { 00:36:21.803 "method": "sock_set_default_impl", 00:36:21.803 "params": { 00:36:21.803 "impl_name": "posix" 00:36:21.803 } 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "method": "sock_impl_set_options", 00:36:21.803 "params": { 00:36:21.803 "impl_name": "ssl", 00:36:21.803 "recv_buf_size": 4096, 00:36:21.803 "send_buf_size": 4096, 00:36:21.803 "enable_recv_pipe": true, 00:36:21.803 "enable_quickack": false, 00:36:21.803 "enable_placement_id": 0, 00:36:21.803 "enable_zerocopy_send_server": true, 00:36:21.803 "enable_zerocopy_send_client": false, 
00:36:21.803 "zerocopy_threshold": 0, 00:36:21.803 "tls_version": 0, 00:36:21.803 "enable_ktls": false 00:36:21.803 } 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "method": "sock_impl_set_options", 00:36:21.803 "params": { 00:36:21.803 "impl_name": "posix", 00:36:21.803 "recv_buf_size": 2097152, 00:36:21.803 "send_buf_size": 2097152, 00:36:21.803 "enable_recv_pipe": true, 00:36:21.803 "enable_quickack": false, 00:36:21.803 "enable_placement_id": 0, 00:36:21.803 "enable_zerocopy_send_server": true, 00:36:21.803 "enable_zerocopy_send_client": false, 00:36:21.803 "zerocopy_threshold": 0, 00:36:21.803 "tls_version": 0, 00:36:21.803 "enable_ktls": false 00:36:21.803 } 00:36:21.803 } 00:36:21.803 ] 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "subsystem": "vmd", 00:36:21.803 "config": [] 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "subsystem": "accel", 00:36:21.803 "config": [ 00:36:21.803 { 00:36:21.803 "method": "accel_set_options", 00:36:21.803 "params": { 00:36:21.803 "small_cache_size": 128, 00:36:21.803 "large_cache_size": 16, 00:36:21.803 "task_count": 2048, 00:36:21.803 "sequence_count": 2048, 00:36:21.803 "buf_count": 2048 00:36:21.803 } 00:36:21.803 } 00:36:21.803 ] 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "subsystem": "bdev", 00:36:21.803 "config": [ 00:36:21.803 { 00:36:21.803 "method": "bdev_set_options", 00:36:21.803 "params": { 00:36:21.803 "bdev_io_pool_size": 65535, 00:36:21.803 "bdev_io_cache_size": 256, 00:36:21.803 "bdev_auto_examine": true, 00:36:21.803 "iobuf_small_cache_size": 128, 00:36:21.803 "iobuf_large_cache_size": 16 00:36:21.803 } 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "method": "bdev_raid_set_options", 00:36:21.803 "params": { 00:36:21.803 "process_window_size_kb": 1024, 00:36:21.803 "process_max_bandwidth_mb_sec": 0 00:36:21.803 } 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "method": "bdev_iscsi_set_options", 00:36:21.803 "params": { 00:36:21.803 "timeout_sec": 30 00:36:21.803 } 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "method": 
"bdev_nvme_set_options", 00:36:21.803 "params": { 00:36:21.803 "action_on_timeout": "none", 00:36:21.803 "timeout_us": 0, 00:36:21.803 "timeout_admin_us": 0, 00:36:21.803 "keep_alive_timeout_ms": 10000, 00:36:21.803 "arbitration_burst": 0, 00:36:21.803 "low_priority_weight": 0, 00:36:21.803 "medium_priority_weight": 0, 00:36:21.803 "high_priority_weight": 0, 00:36:21.803 "nvme_adminq_poll_period_us": 10000, 00:36:21.803 "nvme_ioq_poll_period_us": 0, 00:36:21.803 "io_queue_requests": 512, 00:36:21.803 "delay_cmd_submit": true, 00:36:21.803 "transport_retry_count": 4, 00:36:21.803 "bdev_retry_count": 3, 00:36:21.803 "transport_ack_timeout": 0, 00:36:21.803 "ctrlr_loss_timeout_sec": 0, 00:36:21.803 "reconnect_delay_sec": 0, 00:36:21.803 "fast_io_fail_timeout_sec": 0, 00:36:21.803 "disable_auto_failback": false, 00:36:21.803 "generate_uuids": false, 00:36:21.803 "transport_tos": 0, 00:36:21.803 "nvme_error_stat": false, 00:36:21.803 "rdma_srq_size": 0, 00:36:21.803 "io_path_stat": false, 00:36:21.803 "allow_accel_sequence": false, 00:36:21.803 "rdma_max_cq_size": 0, 00:36:21.803 "rdma_cm_event_timeout_ms": 0, 00:36:21.803 "dhchap_digests": [ 00:36:21.803 "sha256", 00:36:21.803 "sha384", 00:36:21.803 "sha512" 00:36:21.803 ], 00:36:21.803 "dhchap_dhgroups": [ 00:36:21.803 "null", 00:36:21.803 "ffdhe2048", 00:36:21.803 "ffdhe3072", 00:36:21.803 "ffdhe4096", 00:36:21.803 "ffdhe6144", 00:36:21.803 "ffdhe8192" 00:36:21.803 ] 00:36:21.803 } 00:36:21.803 }, 00:36:21.803 { 00:36:21.803 "method": "bdev_nvme_attach_controller", 00:36:21.803 "params": { 00:36:21.803 "name": "nvme0", 00:36:21.803 "trtype": "TCP", 00:36:21.803 "adrfam": "IPv4", 00:36:21.803 "traddr": "127.0.0.1", 00:36:21.803 "trsvcid": "4420", 00:36:21.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.804 "prchk_reftag": false, 00:36:21.804 "prchk_guard": false, 00:36:21.804 "ctrlr_loss_timeout_sec": 0, 00:36:21.804 "reconnect_delay_sec": 0, 00:36:21.804 "fast_io_fail_timeout_sec": 0, 00:36:21.804 "psk": "key0", 
00:36:21.804 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:21.804 "hdgst": false, 00:36:21.804 "ddgst": false 00:36:21.804 } 00:36:21.804 }, 00:36:21.804 { 00:36:21.804 "method": "bdev_nvme_set_hotplug", 00:36:21.804 "params": { 00:36:21.804 "period_us": 100000, 00:36:21.804 "enable": false 00:36:21.804 } 00:36:21.804 }, 00:36:21.804 { 00:36:21.804 "method": "bdev_wait_for_examine" 00:36:21.804 } 00:36:21.804 ] 00:36:21.804 }, 00:36:21.804 { 00:36:21.804 "subsystem": "nbd", 00:36:21.804 "config": [] 00:36:21.804 } 00:36:21.804 ] 00:36:21.804 }' 00:36:21.804 02:12:03 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:21.804 02:12:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:21.804 [2024-07-26 02:12:03.639190] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 00:36:21.804 [2024-07-26 02:12:03.639282] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2449228 ] 00:36:21.804 EAL: No free 2048 kB hugepages reported on node 1 00:36:21.804 [2024-07-26 02:12:03.702034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.804 [2024-07-26 02:12:03.793876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.063 [2024-07-26 02:12:03.983439] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:22.630 02:12:04 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:22.630 02:12:04 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:22.630 02:12:04 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:22.630 02:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.630 02:12:04 
keyring_file -- keyring/file.sh@120 -- # jq length 00:36:22.887 02:12:04 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:22.887 02:12:04 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:22.887 02:12:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:22.887 02:12:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.887 02:12:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.887 02:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.887 02:12:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:23.144 02:12:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:23.144 02:12:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:23.144 02:12:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:23.144 02:12:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.144 02:12:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.144 02:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.144 02:12:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:23.401 02:12:05 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:23.401 02:12:05 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:23.401 02:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:23.401 02:12:05 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:23.658 02:12:05 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:23.658 02:12:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:23.658 02:12:05 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.iLeJhkwNEJ /tmp/tmp.0m8k6n3hU0 00:36:23.658 02:12:05 keyring_file -- keyring/file.sh@20 -- # killprocess 2449228 00:36:23.658 02:12:05 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2449228 ']' 00:36:23.658 02:12:05 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2449228 00:36:23.658 02:12:05 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:23.658 02:12:05 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:23.658 02:12:05 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2449228 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2449228' 00:36:23.917 killing process with pid 2449228 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@969 -- # kill 2449228 00:36:23.917 Received shutdown signal, test time was about 1.000000 seconds 00:36:23.917 00:36:23.917 Latency(us) 00:36:23.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:23.917 =================================================================================================================== 00:36:23.917 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@974 -- # wait 2449228 00:36:23.917 02:12:05 keyring_file -- keyring/file.sh@21 -- # killprocess 2447649 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2447649 ']' 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2447649 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2447649 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2447649' 00:36:23.917 killing process with pid 2447649 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@969 -- # kill 2447649 00:36:23.917 [2024-07-26 02:12:05.911377] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:23.917 02:12:05 keyring_file -- common/autotest_common.sh@974 -- # wait 2447649 00:36:24.486 00:36:24.486 real 0m14.275s 00:36:24.486 user 0m35.623s 00:36:24.486 sys 0m3.352s 00:36:24.486 02:12:06 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:24.486 02:12:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:24.486 ************************************ 00:36:24.486 END TEST keyring_file 00:36:24.486 ************************************ 00:36:24.486 02:12:06 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:36:24.486 02:12:06 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:24.486 02:12:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:24.486 02:12:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:24.486 02:12:06 -- common/autotest_common.sh@10 -- # set +x 00:36:24.486 ************************************ 00:36:24.486 START TEST keyring_linux 00:36:24.486 ************************************ 00:36:24.486 02:12:06 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:24.486 * Looking for test storage... 
00:36:24.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:24.486 02:12:06 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.486 02:12:06 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.486 02:12:06 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.486 02:12:06 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.486 02:12:06 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.486 02:12:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.486 02:12:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.486 02:12:06 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.486 02:12:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:24.486 02:12:06 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:24.486 02:12:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:24.486 02:12:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:24.486 02:12:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:24.486 02:12:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:24.486 02:12:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:24.486 02:12:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:24.486 02:12:06 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:24.486 02:12:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:24.486 /tmp/:spdk-test:key0 00:36:24.486 02:12:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:24.486 02:12:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:24.487 02:12:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:24.487 02:12:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:24.487 02:12:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:36:24.487 02:12:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:24.487 02:12:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:24.487 02:12:06 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:24.487 02:12:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:24.487 02:12:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:24.751 02:12:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:24.751 02:12:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:24.751 /tmp/:spdk-test:key1 00:36:24.751 02:12:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2449592 00:36:24.751 02:12:06 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:24.751 02:12:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2449592 00:36:24.751 02:12:06 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2449592 ']' 00:36:24.751 02:12:06 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.751 02:12:06 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:24.751 02:12:06 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.751 02:12:06 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:24.751 02:12:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:24.751 [2024-07-26 02:12:06.574677] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:36:24.751 [2024-07-26 02:12:06.574771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2449592 ] 00:36:24.751 EAL: No free 2048 kB hugepages reported on node 1 00:36:24.751 [2024-07-26 02:12:06.633281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.751 [2024-07-26 02:12:06.721039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.013 02:12:06 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:25.013 02:12:06 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:25.013 02:12:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:25.013 02:12:06 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.013 02:12:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:25.013 [2024-07-26 02:12:06.976637] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:25.013 null0 00:36:25.013 [2024-07-26 02:12:07.008733] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:25.013 [2024-07-26 02:12:07.009232] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:25.270 02:12:07 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.270 02:12:07 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:25.270 905978420 00:36:25.270 02:12:07 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:25.270 260929800 00:36:25.270 02:12:07 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2449726 00:36:25.270 02:12:07 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:25.270 02:12:07 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2449726 /var/tmp/bperf.sock 00:36:25.270 02:12:07 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2449726 ']' 00:36:25.270 02:12:07 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:25.270 02:12:07 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:25.270 02:12:07 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:25.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:25.270 02:12:07 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:25.270 02:12:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:25.270 [2024-07-26 02:12:07.075526] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 23.11.0 initialization... 
00:36:25.270 [2024-07-26 02:12:07.075592] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2449726 ] 00:36:25.270 EAL: No free 2048 kB hugepages reported on node 1 00:36:25.270 [2024-07-26 02:12:07.136669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.270 [2024-07-26 02:12:07.237431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:25.528 02:12:07 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:25.528 02:12:07 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:25.528 02:12:07 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:25.528 02:12:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:25.785 02:12:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:25.785 02:12:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:26.043 02:12:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:26.043 02:12:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:26.300 [2024-07-26 02:12:08.139509] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:26.300 
nvme0n1 00:36:26.300 02:12:08 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:26.301 02:12:08 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:26.301 02:12:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:26.301 02:12:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:26.301 02:12:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:26.301 02:12:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.558 02:12:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:26.558 02:12:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:26.558 02:12:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:26.558 02:12:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:26.558 02:12:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.558 02:12:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.558 02:12:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:26.818 02:12:08 keyring_linux -- keyring/linux.sh@25 -- # sn=905978420 00:36:26.818 02:12:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:26.818 02:12:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:26.818 02:12:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 905978420 == \9\0\5\9\7\8\4\2\0 ]] 00:36:26.818 02:12:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 905978420 00:36:26.818 02:12:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:26.818 02:12:08 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:27.107 Running I/O for 1 seconds... 00:36:28.039 00:36:28.039 Latency(us) 00:36:28.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.039 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:28.039 nvme0n1 : 1.01 6236.25 24.36 0.00 0.00 20381.78 8543.95 30486.38 00:36:28.039 =================================================================================================================== 00:36:28.039 Total : 6236.25 24.36 0.00 0.00 20381.78 8543.95 30486.38 00:36:28.039 0 00:36:28.039 02:12:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:28.039 02:12:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:28.296 02:12:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:28.296 02:12:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:28.296 02:12:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:28.296 02:12:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:28.296 02:12:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:28.296 02:12:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.554 02:12:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:28.554 02:12:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:28.554 02:12:10 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:28.554 02:12:10 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:28.554 02:12:10 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:28.554 02:12:10 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:28.554 02:12:10 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:28.554 02:12:10 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:28.554 02:12:10 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:28.554 02:12:10 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:28.554 02:12:10 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:28.554 02:12:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:28.812 [2024-07-26 02:12:10.635433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f3860 (107): Transport endpoint is not connected 00:36:28.812 [2024-07-26 02:12:10.635444] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:28.812 [2024-07-26 02:12:10.636418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x21f3860 (9): Bad file descriptor 00:36:28.812 [2024-07-26 02:12:10.637416] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:28.812 [2024-07-26 02:12:10.637441] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:28.812 [2024-07-26 02:12:10.637457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:28.812 request: 00:36:28.812 { 00:36:28.812 "name": "nvme0", 00:36:28.812 "trtype": "tcp", 00:36:28.812 "traddr": "127.0.0.1", 00:36:28.812 "adrfam": "ipv4", 00:36:28.812 "trsvcid": "4420", 00:36:28.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:28.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:28.812 "prchk_reftag": false, 00:36:28.812 "prchk_guard": false, 00:36:28.812 "hdgst": false, 00:36:28.812 "ddgst": false, 00:36:28.812 "psk": ":spdk-test:key1", 00:36:28.812 "method": "bdev_nvme_attach_controller", 00:36:28.812 "req_id": 1 00:36:28.812 } 00:36:28.812 Got JSON-RPC error response 00:36:28.812 response: 00:36:28.812 { 00:36:28.812 "code": -5, 00:36:28.812 "message": "Input/output error" 00:36:28.812 } 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@33 -- # sn=905978420 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 905978420 00:36:28.812 1 links removed 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@33 -- # sn=260929800 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 260929800 00:36:28.812 1 links removed 00:36:28.812 02:12:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2449726 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2449726 ']' 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2449726 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2449726 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2449726' 00:36:28.812 killing process with pid 2449726 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@969 -- # kill 2449726 00:36:28.812 Received shutdown signal, test time was about 1.000000 seconds 00:36:28.812 00:36:28.812 Latency(us) 00:36:28.812 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.812 =================================================================================================================== 00:36:28.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:28.812 02:12:10 keyring_linux -- common/autotest_common.sh@974 -- # wait 2449726 00:36:29.071 02:12:10 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2449592 00:36:29.071 02:12:10 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2449592 ']' 00:36:29.071 02:12:10 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2449592 00:36:29.071 02:12:10 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:29.071 02:12:10 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:29.071 02:12:10 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2449592 00:36:29.071 02:12:10 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:29.071 02:12:10 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:29.071 02:12:10 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2449592' 00:36:29.071 killing process with pid 2449592 00:36:29.071 02:12:10 keyring_linux -- common/autotest_common.sh@969 -- # kill 2449592 00:36:29.071 02:12:10 keyring_linux -- common/autotest_common.sh@974 -- # wait 2449592 00:36:29.637 00:36:29.637 real 0m4.982s 00:36:29.637 user 0m9.420s 00:36:29.637 sys 0m1.638s 00:36:29.637 02:12:11 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:29.637 02:12:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:29.637 ************************************ 00:36:29.637 END TEST keyring_linux 00:36:29.637 ************************************ 00:36:29.637 02:12:11 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:29.637 02:12:11 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:29.637 02:12:11 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 
']' 00:36:29.637 02:12:11 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:36:29.637 02:12:11 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:36:29.637 02:12:11 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:29.637 02:12:11 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:29.637 02:12:11 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:29.637 02:12:11 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:29.637 02:12:11 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:29.637 02:12:11 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:36:29.637 02:12:11 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:29.637 02:12:11 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:29.637 02:12:11 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:29.637 02:12:11 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:36:29.637 02:12:11 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:36:29.637 02:12:11 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:36:29.637 02:12:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:29.637 02:12:11 -- common/autotest_common.sh@10 -- # set +x 00:36:29.637 02:12:11 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:36:29.638 02:12:11 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:29.638 02:12:11 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:29.638 02:12:11 -- common/autotest_common.sh@10 -- # set +x 00:36:31.540 INFO: APP EXITING 00:36:31.540 INFO: killing all VMs 00:36:31.540 INFO: killing vhost app 00:36:31.540 INFO: EXIT DONE 00:36:32.106 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:36:32.106 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:32.363 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:32.364 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:32.364 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:32.364 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:32.364 0000:00:04.2 (8086 0e22): Already 
using the ioatdma driver 00:36:32.364 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:32.364 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:32.364 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:32.364 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:32.364 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:32.364 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:32.364 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:32.364 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:32.364 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:32.364 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:33.742 Cleaning 00:36:33.742 Removing: /var/run/dpdk/spdk0/config 00:36:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:33.742 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:33.742 Removing: /var/run/dpdk/spdk1/config 00:36:33.742 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:33.742 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:33.742 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:33.742 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:33.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:33.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:33.743 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:33.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:33.743 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:33.743 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:33.743 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:33.743 Removing: /var/run/dpdk/spdk2/config 00:36:33.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:33.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:33.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:33.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:33.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:33.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:33.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:33.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:33.743 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:33.743 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:33.743 Removing: /var/run/dpdk/spdk3/config 00:36:33.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:33.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:33.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:33.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:33.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:33.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:33.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:33.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:33.743 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:33.743 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:33.743 Removing: /var/run/dpdk/spdk4/config 00:36:33.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:33.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:33.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:33.743 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:33.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:33.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:33.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:33.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:33.743 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:33.743 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:33.743 Removing: /dev/shm/bdev_svc_trace.1 00:36:33.743 Removing: /dev/shm/nvmf_trace.0 00:36:33.743 Removing: /dev/shm/spdk_tgt_trace.pid2134123 00:36:33.743 Removing: /var/run/dpdk/spdk0 00:36:33.743 Removing: /var/run/dpdk/spdk1 00:36:33.743 Removing: /var/run/dpdk/spdk2 00:36:33.743 Removing: /var/run/dpdk/spdk3 00:36:33.743 Removing: /var/run/dpdk/spdk4 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2132579 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2133308 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2134123 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2134562 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2135249 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2135388 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2136104 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2136113 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2136355 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2137557 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2138463 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2138772 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2138959 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2139159 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2139347 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2139510 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2139662 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2139848 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2140158 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2142941 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2143291 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2143499 00:36:33.743 Removing: 
/var/run/dpdk/spdk_pid2143585 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2143897 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2144020 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2144330 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2144453 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2144624 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2144634 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2144856 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2144929 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2145308 00:36:33.743 Removing: /var/run/dpdk/spdk_pid2145467 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2145672 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2147730 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2150345 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2157207 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2157616 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2160124 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2160284 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2162903 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2166623 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2168684 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2174954 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2180773 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2182095 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2182765 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2192990 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2195260 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2248681 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2251958 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2255796 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2259510 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2259518 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2260168 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2260821 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2261359 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2261760 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2261881 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2262022 
00:36:34.002 Removing: /var/run/dpdk/spdk_pid2262150 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2262157 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2262816 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2263354 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2264006 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2264411 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2264416 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2264665 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2265547 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2266265 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2272083 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2297361 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2300152 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2301217 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2302531 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2302657 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2302775 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2302820 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2303251 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2304558 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2305164 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2305591 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2307203 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2307628 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2308069 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2310573 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2313825 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2317352 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2340842 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2343483 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2347375 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2348323 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2349330 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2351984 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2354842 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2358928 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2358968 00:36:34.002 Removing: 
/var/run/dpdk/spdk_pid2361693 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2361910 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2362086 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2362347 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2362356 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2363427 00:36:34.002 Removing: /var/run/dpdk/spdk_pid2364608 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2365786 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2366964 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2368164 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2369430 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2373118 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2373543 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2374846 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2375583 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2379261 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2381143 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2385160 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2388605 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2394818 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2399178 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2399185 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2411376 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2411858 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2412307 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2412717 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2413291 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2413702 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2414107 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2414519 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2417008 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2417167 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2421547 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2421720 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2423322 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2428231 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2428338 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2431130 
00:36:34.003 Removing: /var/run/dpdk/spdk_pid2432527 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2433923 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2434662 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2436065 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2436943 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2442214 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2442595 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2442989 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2444542 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2444936 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2445216 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2447649 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2447667 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2449228 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2449592 00:36:34.003 Removing: /var/run/dpdk/spdk_pid2449726 00:36:34.003 Clean 00:36:34.261 02:12:16 -- common/autotest_common.sh@1451 -- # return 0 00:36:34.261 02:12:16 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:36:34.261 02:12:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:34.261 02:12:16 -- common/autotest_common.sh@10 -- # set +x 00:36:34.261 02:12:16 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:36:34.261 02:12:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:34.261 02:12:16 -- common/autotest_common.sh@10 -- # set +x 00:36:34.262 02:12:16 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:34.262 02:12:16 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:34.262 02:12:16 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:34.262 02:12:16 -- spdk/autotest.sh@395 -- # hash lcov 00:36:34.262 02:12:16 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:34.262 02:12:16 -- spdk/autotest.sh@397 -- # hostname 00:36:34.262 02:12:16 -- 
spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:34.519 geninfo: WARNING: invalid characters removed from testname! 00:37:06.578 02:12:43 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:06.578 02:12:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:08.477 02:12:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:11.757 02:12:53 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:14.323 02:12:56 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:17.601 02:12:59 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:20.121 02:13:01 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:20.121 02:13:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.121 02:13:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:20.121 02:13:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.121 02:13:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.121 02:13:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.121 02:13:01 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.121 02:13:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.121 02:13:01 -- paths/export.sh@5 -- $ export PATH 00:37:20.121 02:13:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.121 02:13:01 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:20.121 02:13:01 -- common/autobuild_common.sh@447 -- $ date +%s 00:37:20.121 02:13:01 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721952781.XXXXXX 00:37:20.121 02:13:02 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721952781.UfEaBe 00:37:20.121 02:13:02 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:37:20.121 02:13:02 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']' 00:37:20.121 02:13:02 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:20.121 02:13:02 -- 
common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:20.121 02:13:02 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:20.121 02:13:02 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:20.121 02:13:02 -- common/autobuild_common.sh@463 -- $ get_config_params 00:37:20.121 02:13:02 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:37:20.121 02:13:02 -- common/autotest_common.sh@10 -- $ set +x 00:37:20.121 02:13:02 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:20.121 02:13:02 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:37:20.121 02:13:02 -- pm/common@17 -- $ local monitor 00:37:20.121 02:13:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:20.121 02:13:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:20.121 02:13:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:20.121 02:13:02 -- pm/common@21 -- $ date +%s 00:37:20.121 02:13:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:20.121 02:13:02 -- pm/common@21 -- $ date +%s 00:37:20.121 02:13:02 -- pm/common@25 -- $ sleep 1 00:37:20.121 02:13:02 -- pm/common@21 -- $ date +%s 00:37:20.121 02:13:02 -- pm/common@21 -- $ date +%s 00:37:20.121 02:13:02 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721952782 00:37:20.121 02:13:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721952782 00:37:20.121 02:13:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721952782 00:37:20.122 02:13:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721952782 00:37:20.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721952782_collect-vmstat.pm.log 00:37:20.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721952782_collect-cpu-load.pm.log 00:37:20.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721952782_collect-cpu-temp.pm.log 00:37:20.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721952782_collect-bmc-pm.bmc.pm.log 00:37:21.058 02:13:03 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:37:21.058 02:13:03 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:21.058 02:13:03 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:21.058 02:13:03 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:21.058 02:13:03 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:21.058 02:13:03 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:21.058 02:13:03 
-- spdk/autopackage.sh@19 -- $ timing_finish 00:37:21.058 02:13:03 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:21.058 02:13:03 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:21.058 02:13:03 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:21.058 02:13:03 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:21.058 02:13:03 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:21.058 02:13:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:21.058 02:13:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:21.058 02:13:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:21.058 02:13:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:21.058 02:13:03 -- pm/common@44 -- $ pid=2461351 00:37:21.058 02:13:03 -- pm/common@50 -- $ kill -TERM 2461351 00:37:21.058 02:13:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:21.058 02:13:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:21.058 02:13:03 -- pm/common@44 -- $ pid=2461353 00:37:21.058 02:13:03 -- pm/common@50 -- $ kill -TERM 2461353 00:37:21.058 02:13:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:21.058 02:13:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:21.058 02:13:03 -- pm/common@44 -- $ pid=2461355 00:37:21.058 02:13:03 -- pm/common@50 -- $ kill -TERM 2461355 00:37:21.058 02:13:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:21.058 02:13:03 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:21.058 02:13:03 -- pm/common@44 -- $ pid=2461383 00:37:21.317 02:13:03 -- pm/common@50 -- $ sudo -E kill -TERM 2461383 00:37:21.317 + [[ -n 2028364 ]] 00:37:21.317 + sudo kill 2028364 00:37:21.327 [Pipeline] } 00:37:21.346 [Pipeline] // stage 00:37:21.351 [Pipeline] } 00:37:21.369 [Pipeline] // timeout 00:37:21.374 [Pipeline] } 00:37:21.391 [Pipeline] // catchError 00:37:21.396 [Pipeline] } 00:37:21.414 [Pipeline] // wrap 00:37:21.438 [Pipeline] } 00:37:21.481 [Pipeline] // catchError 00:37:21.487 [Pipeline] stage 00:37:21.488 [Pipeline] { (Epilogue) 00:37:21.495 [Pipeline] catchError 00:37:21.496 [Pipeline] { 00:37:21.503 [Pipeline] echo 00:37:21.504 Cleanup processes 00:37:21.506 [Pipeline] sh 00:37:21.786 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:21.786 2461492 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:21.786 2461616 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:21.799 [Pipeline] sh 00:37:22.083 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:22.083 ++ grep -v 'sudo pgrep' 00:37:22.083 ++ awk '{print $1}' 00:37:22.083 + sudo kill -9 2461492 00:37:22.094 [Pipeline] sh 00:37:22.376 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:32.351 [Pipeline] sh 00:37:32.637 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:32.637 Artifacts sizes are good 00:37:32.651 [Pipeline] archiveArtifacts 00:37:32.659 Archiving artifacts 00:37:32.887 [Pipeline] sh 00:37:33.187 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:33.202 [Pipeline] cleanWs 00:37:33.212 [WS-CLEANUP] Deleting project workspace... 00:37:33.212 [WS-CLEANUP] Deferred wipeout is used... 
00:37:33.219 [WS-CLEANUP] done 00:37:33.221 [Pipeline] } 00:37:33.234 [Pipeline] // catchError 00:37:33.245 [Pipeline] sh 00:37:33.525 + logger -p user.info -t JENKINS-CI 00:37:33.535 [Pipeline] } 00:37:33.548 [Pipeline] // stage 00:37:33.552 [Pipeline] } 00:37:33.565 [Pipeline] // node 00:37:33.570 [Pipeline] End of Pipeline 00:37:33.591 Finished: SUCCESS